00:00:00.001 Started by upstream project "autotest-per-patch" build number 132332 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.543 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.555 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.571 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:11.571 > git config core.sparsecheckout # timeout=10 00:00:11.584 > git read-tree -mu HEAD # timeout=10 00:00:11.601 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.622 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.622 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:11.713 [Pipeline] Start of Pipeline 00:00:11.726 [Pipeline] library 00:00:11.728 Loading library shm_lib@master 00:00:11.728 Library shm_lib@master is cached. Copying from home. 00:00:11.745 [Pipeline] node 00:00:11.755 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:11.757 [Pipeline] { 00:00:11.766 [Pipeline] catchError 00:00:11.768 [Pipeline] { 00:00:11.776 [Pipeline] wrap 00:00:11.783 [Pipeline] { 00:00:11.788 [Pipeline] stage 00:00:11.789 [Pipeline] { (Prologue) 00:00:11.802 [Pipeline] echo 00:00:11.803 Node: VM-host-SM0 00:00:11.808 [Pipeline] cleanWs 00:00:11.817 [WS-CLEANUP] Deleting project workspace... 00:00:11.817 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.823 [WS-CLEANUP] done 00:00:12.043 [Pipeline] setCustomBuildProperty 00:00:12.155 [Pipeline] httpRequest 00:00:12.528 [Pipeline] echo 00:00:12.529 Sorcerer 10.211.164.20 is alive 00:00:12.538 [Pipeline] retry 00:00:12.540 [Pipeline] { 00:00:12.553 [Pipeline] httpRequest 00:00:12.556 HttpMethod: GET 00:00:12.557 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.557 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.574 Response Code: HTTP/1.1 200 OK 00:00:12.574 Success: Status code 200 is in the accepted range: 200,404 00:00:12.575 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.907 [Pipeline] } 00:00:16.924 [Pipeline] // retry 00:00:16.931 [Pipeline] sh 00:00:17.211 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.226 [Pipeline] httpRequest 00:00:17.894 [Pipeline] echo 00:00:17.896 Sorcerer 10.211.164.20 is alive 00:00:17.904 [Pipeline] retry 00:00:17.906 [Pipeline] { 00:00:17.919 [Pipeline] httpRequest 00:00:17.923 HttpMethod: GET 00:00:17.924 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:17.924 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:17.926 Response Code: HTTP/1.1 200 OK 00:00:17.926 Success: Status code 200 is in the accepted range: 200,404 00:00:17.927 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:41.653 [Pipeline] } 00:00:41.671 [Pipeline] // retry 00:00:41.678 [Pipeline] sh 00:00:41.960 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:44.504 [Pipeline] sh 00:00:44.785 + git -C spdk log --oneline -n5 00:00:44.785 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:00:44.785 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:00:44.785 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:44.785 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:00:44.785 029355612 bdev_ut: add manual examine bdev unit test case 00:00:44.806 [Pipeline] writeFile 00:00:44.822 [Pipeline] sh 00:00:45.104 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:45.116 [Pipeline] sh 00:00:45.397 + cat autorun-spdk.conf 00:00:45.397 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.397 SPDK_TEST_NVMF=1 00:00:45.397 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.397 SPDK_TEST_USDT=1 00:00:45.397 SPDK_TEST_NVMF_MDNS=1 00:00:45.397 SPDK_RUN_UBSAN=1 00:00:45.397 NET_TYPE=virt 00:00:45.397 SPDK_JSONRPC_GO_CLIENT=1 00:00:45.397 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.404 RUN_NIGHTLY=0 00:00:45.406 [Pipeline] } 00:00:45.420 [Pipeline] // stage 00:00:45.435 [Pipeline] stage 00:00:45.437 [Pipeline] { (Run VM) 00:00:45.450 [Pipeline] sh 00:00:45.729 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:45.729 + echo 'Start stage prepare_nvme.sh' 00:00:45.729 Start stage prepare_nvme.sh 00:00:45.729 + [[ -n 3 ]] 00:00:45.729 + disk_prefix=ex3 00:00:45.729 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:45.729 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:45.729 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:45.729 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.729 ++ SPDK_TEST_NVMF=1 00:00:45.729 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.729 ++ SPDK_TEST_USDT=1 00:00:45.729 ++ SPDK_TEST_NVMF_MDNS=1 00:00:45.730 ++ SPDK_RUN_UBSAN=1 00:00:45.730 ++ NET_TYPE=virt 00:00:45.730 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:45.730 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.730 ++ RUN_NIGHTLY=0 00:00:45.730 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:45.730 + nvme_files=() 00:00:45.730 + declare -A nvme_files 00:00:45.730 + backend_dir=/var/lib/libvirt/images/backends 00:00:45.730 + nvme_files['nvme.img']=5G 00:00:45.730 + nvme_files['nvme-cmb.img']=5G 00:00:45.730 + nvme_files['nvme-multi0.img']=4G 00:00:45.730 + nvme_files['nvme-multi1.img']=4G 00:00:45.730 + nvme_files['nvme-multi2.img']=4G 00:00:45.730 + nvme_files['nvme-openstack.img']=8G 00:00:45.730 + nvme_files['nvme-zns.img']=5G 00:00:45.730 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:45.730 + (( SPDK_TEST_FTL == 1 )) 00:00:45.730 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:45.730 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:45.730 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.730 + for nvme in "${!nvme_files[@]}" 00:00:45.730 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:45.988 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.988 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:45.988 + echo 'End stage prepare_nvme.sh' 00:00:45.988 End stage prepare_nvme.sh 00:00:46.000 [Pipeline] sh 00:00:46.330 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:46.330 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:46.330 00:00:46.330 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:46.330 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:46.330 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:46.330 HELP=0 00:00:46.330 DRY_RUN=0 00:00:46.330 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:46.330 NVME_DISKS_TYPE=nvme,nvme, 00:00:46.330 NVME_AUTO_CREATE=0 00:00:46.330 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:46.330 NVME_CMB=,, 00:00:46.330 NVME_PMR=,, 00:00:46.330 NVME_ZNS=,, 00:00:46.330 NVME_MS=,, 00:00:46.330 NVME_FDP=,, 00:00:46.330 
SPDK_VAGRANT_DISTRO=fedora39 00:00:46.330 SPDK_VAGRANT_VMCPU=10 00:00:46.330 SPDK_VAGRANT_VMRAM=12288 00:00:46.330 SPDK_VAGRANT_PROVIDER=libvirt 00:00:46.330 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:46.330 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:46.330 SPDK_OPENSTACK_NETWORK=0 00:00:46.330 VAGRANT_PACKAGE_BOX=0 00:00:46.330 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:46.330 FORCE_DISTRO=true 00:00:46.330 VAGRANT_BOX_VERSION= 00:00:46.330 EXTRA_VAGRANTFILES= 00:00:46.330 NIC_MODEL=e1000 00:00:46.330 00:00:46.330 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt' 00:00:46.330 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:48.854 Bringing machine 'default' up with 'libvirt' provider... 00:00:49.788 ==> default: Creating image (snapshot of base box volume). 00:00:50.047 ==> default: Creating domain with the following settings... 00:00:50.047 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732035581_2cdb18f75ab82403b6b1 00:00:50.047 ==> default: -- Domain type: kvm 00:00:50.047 ==> default: -- Cpus: 10 00:00:50.047 ==> default: -- Feature: acpi 00:00:50.047 ==> default: -- Feature: apic 00:00:50.047 ==> default: -- Feature: pae 00:00:50.047 ==> default: -- Memory: 12288M 00:00:50.047 ==> default: -- Memory Backing: hugepages: 00:00:50.047 ==> default: -- Management MAC: 00:00:50.047 ==> default: -- Loader: 00:00:50.047 ==> default: -- Nvram: 00:00:50.047 ==> default: -- Base box: spdk/fedora39 00:00:50.047 ==> default: -- Storage pool: default 00:00:50.047 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732035581_2cdb18f75ab82403b6b1.img (20G) 00:00:50.047 ==> default: -- Volume Cache: default 00:00:50.047 ==> default: -- Kernel: 00:00:50.047 ==> default: -- Initrd: 00:00:50.047 ==> default: -- Graphics Type: vnc 00:00:50.047 ==> default: -- Graphics Port: -1 00:00:50.047 ==> default: -- Graphics IP: 127.0.0.1 00:00:50.047 ==> default: -- Graphics Password: Not defined 00:00:50.047 ==> default: -- Video Type: cirrus 00:00:50.047 ==> default: -- Video VRAM: 9216 00:00:50.047 ==> default: -- Sound Type: 00:00:50.047 ==> default: -- Keymap: en-us 00:00:50.047 ==> default: -- TPM Path: 00:00:50.047 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:50.047 ==> default: -- Command line args: 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:50.047 ==> default: -> value=-drive, 00:00:50.047 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:50.047 ==> default: -> value=-drive, 00:00:50.047 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.047 ==> default: -> value=-drive, 00:00:50.047 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.047 ==> default: -> value=-drive, 00:00:50.047 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:50.047 ==> default: -> value=-device, 00:00:50.047 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.047 ==> default: Creating shared folders metadata... 00:00:50.047 ==> default: Starting domain. 00:00:51.955 ==> default: Waiting for domain to get an IP address... 00:01:06.833 ==> default: Waiting for SSH to become available... 00:01:08.213 ==> default: Configuring and enabling network interfaces... 00:01:12.403 default: SSH address: 192.168.121.102:22 00:01:12.403 default: SSH username: vagrant 00:01:12.403 default: SSH auth method: private key 00:01:14.937 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:23.078 ==> default: Mounting SSHFS shared folder... 00:01:24.016 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:24.016 ==> default: Checking Mount.. 00:01:25.393 ==> default: Folder Successfully Mounted! 00:01:25.393 ==> default: Running provisioner: file... 00:01:26.329 default: ~/.gitconfig => .gitconfig 00:01:26.587 00:01:26.587 SUCCESS! 00:01:26.587 00:01:26.587 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:26.587 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:26.587 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:26.587 00:01:26.597 [Pipeline] } 00:01:26.614 [Pipeline] // stage 00:01:26.623 [Pipeline] dir 00:01:26.624 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt 00:01:26.626 [Pipeline] { 00:01:26.641 [Pipeline] catchError 00:01:26.643 [Pipeline] { 00:01:26.657 [Pipeline] sh 00:01:26.937 + vagrant ssh-config --host vagrant 00:01:26.937 + sed -ne /^Host/,$p 00:01:26.937 + tee ssh_conf 00:01:30.225 Host vagrant 00:01:30.225 HostName 192.168.121.102 00:01:30.225 User vagrant 00:01:30.225 Port 22 00:01:30.225 UserKnownHostsFile /dev/null 00:01:30.225 StrictHostKeyChecking no 00:01:30.225 PasswordAuthentication no 00:01:30.225 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:30.225 IdentitiesOnly yes 00:01:30.225 LogLevel FATAL 00:01:30.225 ForwardAgent yes 00:01:30.225 ForwardX11 yes 00:01:30.225 00:01:30.239 [Pipeline] withEnv 00:01:30.242 [Pipeline] { 00:01:30.258 [Pipeline] sh 00:01:30.546 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:30.546 source /etc/os-release 00:01:30.546 [[ -e /image.version ]] && img=$(< /image.version) 00:01:30.546 # Minimal, systemd-like check. 
00:01:30.546 if [[ -e /.dockerenv ]]; then 00:01:30.546 # Clear garbage from the node's name: 00:01:30.546 # agt-er_autotest_547-896 -> autotest_547-896 00:01:30.546 # $HOSTNAME is the actual container id 00:01:30.546 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:30.546 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:30.546 # We can assume this is a mount from a host where container is running, 00:01:30.546 # so fetch its hostname to easily identify the target swarm worker. 00:01:30.546 container="$(< /etc/hostname) ($agent)" 00:01:30.546 else 00:01:30.546 # Fallback 00:01:30.546 container=$agent 00:01:30.546 fi 00:01:30.546 fi 00:01:30.546 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:30.546 00:01:30.824 [Pipeline] } 00:01:30.843 [Pipeline] // withEnv 00:01:30.852 [Pipeline] setCustomBuildProperty 00:01:30.869 [Pipeline] stage 00:01:30.872 [Pipeline] { (Tests) 00:01:30.891 [Pipeline] sh 00:01:31.230 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:31.244 [Pipeline] sh 00:01:31.525 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:31.540 [Pipeline] timeout 00:01:31.540 Timeout set to expire in 1 hr 0 min 00:01:31.542 [Pipeline] { 00:01:31.556 [Pipeline] sh 00:01:31.837 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:32.404 HEAD is now at f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:01:32.416 [Pipeline] sh 00:01:32.697 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:32.969 [Pipeline] sh 00:01:33.249 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:33.523 [Pipeline] sh 00:01:33.804 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:34.063 ++ readlink -f spdk_repo 00:01:34.063 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:34.063 + [[ -n /home/vagrant/spdk_repo ]] 00:01:34.063 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:34.063 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:34.063 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:34.063 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:34.063 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:34.063 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:34.063 + cd /home/vagrant/spdk_repo 00:01:34.063 + source /etc/os-release 00:01:34.063 ++ NAME='Fedora Linux' 00:01:34.063 ++ VERSION='39 (Cloud Edition)' 00:01:34.063 ++ ID=fedora 00:01:34.063 ++ VERSION_ID=39 00:01:34.063 ++ VERSION_CODENAME= 00:01:34.063 ++ PLATFORM_ID=platform:f39 00:01:34.063 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:34.063 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:34.063 ++ LOGO=fedora-logo-icon 00:01:34.063 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:34.063 ++ HOME_URL=https://fedoraproject.org/ 00:01:34.063 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:34.063 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:34.063 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:34.063 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:34.063 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:34.063 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:34.063 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:34.063 ++ SUPPORT_END=2024-11-12 00:01:34.063 ++ VARIANT='Cloud Edition' 00:01:34.063 ++ VARIANT_ID=cloud 00:01:34.063 + uname -a 00:01:34.063 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:34.063 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:34.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:34.322 Hugepages 00:01:34.322 node hugesize free / total 00:01:34.322 node0 1048576kB 0 / 0 00:01:34.322 node0 2048kB 0 / 0 00:01:34.322 00:01:34.322 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:34.581 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:34.581 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:34.581 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:34.581 + rm -f /tmp/spdk-ld-path 00:01:34.581 + source autorun-spdk.conf 00:01:34.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.581 ++ SPDK_TEST_NVMF=1 00:01:34.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:34.581 ++ SPDK_TEST_USDT=1 00:01:34.581 ++ SPDK_TEST_NVMF_MDNS=1 00:01:34.581 ++ SPDK_RUN_UBSAN=1 00:01:34.581 ++ NET_TYPE=virt 00:01:34.581 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:34.581 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.581 ++ RUN_NIGHTLY=0 00:01:34.581 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:34.581 + [[ -n '' ]] 00:01:34.581 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:34.581 + for M in /var/spdk/build-*-manifest.txt 00:01:34.581 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:34.581 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.581 + for M in /var/spdk/build-*-manifest.txt 00:01:34.581 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:34.581 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.581 + for M in /var/spdk/build-*-manifest.txt 00:01:34.581 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:34.581 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.581 ++ uname 00:01:34.581 + [[ Linux == \L\i\n\u\x ]] 00:01:34.581 + sudo dmesg -T 00:01:34.581 + sudo dmesg --clear 00:01:34.581 + dmesg_pid=5260 00:01:34.581 + [[ Fedora Linux == FreeBSD ]] 00:01:34.581 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.581 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.581 + sudo dmesg -Tw 00:01:34.581 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:34.581 + [[ -x /usr/src/fio-static/fio ]] 00:01:34.581 + export FIO_BIN=/usr/src/fio-static/fio 00:01:34.581 + FIO_BIN=/usr/src/fio-static/fio 00:01:34.581 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:34.581 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:34.581 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:34.581 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.581 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.581 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:34.581 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.581 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.581 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:34.581 17:00:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:34.581 17:00:27 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.581 17:00:27 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:34.581 17:00:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:34.581 17:00:27 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:34.840 17:00:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:34.841 17:00:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:34.841 17:00:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:34.841 17:00:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:34.841 17:00:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:34.841 17:00:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:34.841 17:00:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.841 17:00:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.841 17:00:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.841 17:00:27 -- paths/export.sh@5 -- $ export PATH 00:01:34.841 17:00:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.841 17:00:27 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:34.841 17:00:27 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:34.841 17:00:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732035627.XXXXXX 00:01:34.841 17:00:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732035627.qz8kNy 00:01:34.841 17:00:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:34.841 17:00:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:34.841 17:00:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:34.841 17:00:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:34.841 17:00:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:34.841 17:00:27 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:34.841 17:00:27 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:34.841 17:00:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.841 17:00:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:34.841 17:00:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:34.841 17:00:27 -- pm/common@17 -- $ local monitor 00:01:34.841 17:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.841 17:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.841 17:00:27 -- pm/common@25 -- $ sleep 1 00:01:34.841 17:00:27 -- pm/common@21 -- $ date +%s 00:01:34.841 17:00:27 -- pm/common@21 -- $ date +%s 00:01:34.841 17:00:27 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732035627 00:01:34.841 17:00:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732035627 00:01:34.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732035627_collect-cpu-load.pm.log 00:01:34.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732035627_collect-vmstat.pm.log 00:01:35.777 17:00:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:35.777 17:00:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:35.777 17:00:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:35.777 17:00:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:35.777 17:00:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:35.777 Tue Nov 19 05:00:28 PM UTC 2024 00:01:35.777 17:00:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:35.777 v25.01-pre-199-gf22e807f1 00:01:35.777 17:00:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:35.777 17:00:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:35.777 17:00:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:35.777 17:00:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:35.777 17:00:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:35.777 17:00:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.777 ************************************ 00:01:35.777 START TEST ubsan 00:01:35.777 ************************************ 00:01:35.777 using ubsan 00:01:35.777 17:00:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:35.777 00:01:35.777 real 0m0.000s 00:01:35.777 user 0m0.000s 00:01:35.777 sys 0m0.000s 00:01:35.777 17:00:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:35.777 17:00:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:35.777 ************************************ 00:01:35.777 END TEST ubsan 00:01:35.777 ************************************ 00:01:35.777 17:00:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:35.777 17:00:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:35.777 17:00:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:35.777 17:00:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:35.777 17:00:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:35.777 17:00:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:35.777 17:00:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:35.777 17:00:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:35.778 17:00:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:36.037 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:36.037 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:36.604 Using 'verbs' RDMA provider 00:01:49.381 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:04.267 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:04.267 go version go1.21.1 linux/amd64 00:02:04.267 Creating mk/config.mk...done. 00:02:04.267 Creating mk/cc.flags.mk...done. 
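For local reproduction, the configure step just logged reduces to a short script. This is a minimal sketch, not the CI harness itself: it assumes an SPDK checkout with submodules at $HOME/spdk_repo/spdk, every flag is copied from the configure invocation logged above, and any --with-* option whose dependency is missing locally can simply be dropped.

#!/usr/bin/env bash
# Minimal sketch of the configure+build sequence this log performs.
# Assumption: SPDK checkout (with submodules) at $HOME/spdk_repo/spdk.
set -euo pipefail
cd "$HOME/spdk_repo/spdk"
# Flags copied verbatim from the configure line above.
./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
    --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio \
    --with-iscsi-initiator --disable-unit-tests --with-ublk \
    --with-avahi --with-golang --with-shared
make -j10   # same job count the CI run uses in the make step below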
00:02:04.267 Type 'make' to build. 00:02:04.267 17:00:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:04.267 17:00:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:04.267 17:00:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:04.267 17:00:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.267 ************************************ 00:02:04.267 START TEST make 00:02:04.267 ************************************ 00:02:04.267 17:00:55 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:04.267 make[1]: Nothing to be done for 'all'. 00:02:19.172 The Meson build system 00:02:19.172 Version: 1.5.0 00:02:19.172 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:19.172 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:19.172 Build type: native build 00:02:19.172 Program cat found: YES (/usr/bin/cat) 00:02:19.172 Project name: DPDK 00:02:19.172 Project version: 24.03.0 00:02:19.172 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.172 C linker for the host machine: cc ld.bfd 2.40-14 00:02:19.172 Host machine cpu family: x86_64 00:02:19.172 Host machine cpu: x86_64 00:02:19.172 Message: ## Building in Developer Mode ## 00:02:19.172 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.172 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:19.172 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.172 Program python3 found: YES (/usr/bin/python3) 00:02:19.172 Program cat found: YES (/usr/bin/cat) 00:02:19.172 Compiler for C supports arguments -march=native: YES 00:02:19.172 Checking for size of "void *" : 8 00:02:19.172 Checking for size of "void *" : 8 (cached) 00:02:19.172 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:19.172 Library m found: YES 00:02:19.172 Library numa found: YES 00:02:19.172 Has header "numaif.h" : YES 00:02:19.172 Library fdt found: NO 00:02:19.172 Library execinfo found: NO 00:02:19.172 Has header "execinfo.h" : YES 00:02:19.172 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.172 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.172 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.172 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.172 Run-time dependency openssl found: YES 3.1.1 00:02:19.172 Run-time dependency libpcap found: YES 1.10.4 00:02:19.172 Has header "pcap.h" with dependency libpcap: YES 00:02:19.172 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.172 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.172 Compiler for C supports arguments -Wformat: YES 00:02:19.172 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.172 Compiler for C supports arguments -Wformat-security: NO 00:02:19.172 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.172 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.172 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.172 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.172 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.172 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.172 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.172 Compiler for C supports arguments -Wundef: YES 00:02:19.172 Compiler for C supports arguments -Wwrite-strings: YES 
00:02:19.172 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.172 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:19.172 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.172 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.172 Program objdump found: YES (/usr/bin/objdump) 00:02:19.172 Compiler for C supports arguments -mavx512f: YES 00:02:19.172 Checking if "AVX512 checking" compiles: YES 00:02:19.172 Fetching value of define "__SSE4_2__" : 1 00:02:19.172 Fetching value of define "__AES__" : 1 00:02:19.172 Fetching value of define "__AVX__" : 1 00:02:19.172 Fetching value of define "__AVX2__" : 1 00:02:19.172 Fetching value of define "__AVX512BW__" : (undefined) 00:02:19.172 Fetching value of define "__AVX512CD__" : (undefined) 00:02:19.172 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:19.172 Fetching value of define "__AVX512F__" : (undefined) 00:02:19.172 Fetching value of define "__AVX512VL__" : (undefined) 00:02:19.172 Fetching value of define "__PCLMUL__" : 1 00:02:19.172 Fetching value of define "__RDRND__" : 1 00:02:19.172 Fetching value of define "__RDSEED__" : 1 00:02:19.172 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.172 Fetching value of define "__znver1__" : (undefined) 00:02:19.172 Fetching value of define "__znver2__" : (undefined) 00:02:19.172 Fetching value of define "__znver3__" : (undefined) 00:02:19.172 Fetching value of define "__znver4__" : (undefined) 00:02:19.172 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.172 Message: lib/log: Defining dependency "log" 00:02:19.172 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.172 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.172 Checking for function "getentropy" : NO 00:02:19.172 Message: lib/eal: Defining dependency "eal" 00:02:19.172 Message: lib/ring: Defining dependency "ring" 00:02:19.172 Message: lib/rcu: Defining dependency "rcu" 00:02:19.172 Message: lib/mempool: Defining dependency "mempool" 00:02:19.172 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.172 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.172 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.172 Compiler for C supports arguments -mpclmul: YES 00:02:19.172 Compiler for C supports arguments -maes: YES 00:02:19.172 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.172 Compiler for C supports arguments -mavx512bw: YES 00:02:19.172 Compiler for C supports arguments -mavx512dq: YES 00:02:19.172 Compiler for C supports arguments -mavx512vl: YES 00:02:19.172 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.172 Compiler for C supports arguments -mavx2: YES 00:02:19.172 Compiler for C supports arguments -mavx: YES 00:02:19.172 Message: lib/net: Defining dependency "net" 00:02:19.172 Message: lib/meter: Defining dependency "meter" 00:02:19.172 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.172 Message: lib/pci: Defining dependency "pci" 00:02:19.172 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.172 Message: lib/hash: Defining dependency "hash" 00:02:19.173 Message: lib/timer: Defining dependency "timer" 00:02:19.173 Message: lib/compressdev: Defining dependency "compressdev" 00:02:19.173 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.173 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.173 Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:19.173 Message: lib/power: Defining dependency "power" 00:02:19.173 Message: lib/reorder: Defining dependency "reorder" 00:02:19.173 Message: lib/security: Defining dependency "security" 00:02:19.173 Has header "linux/userfaultfd.h" : YES 00:02:19.173 Has header "linux/vduse.h" : YES 00:02:19.173 Message: lib/vhost: Defining dependency "vhost" 00:02:19.173 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.173 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.173 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.173 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.173 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:19.173 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:19.173 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:19.173 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:19.173 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:19.173 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:19.173 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:19.173 Configuring doxy-api-html.conf using configuration 00:02:19.173 Configuring doxy-api-man.conf using configuration 00:02:19.173 Program mandb found: YES (/usr/bin/mandb) 00:02:19.173 Program sphinx-build found: NO 00:02:19.173 Configuring rte_build_config.h using configuration 00:02:19.173 Message: 00:02:19.173 ================= 00:02:19.173 Applications Enabled 00:02:19.173 ================= 00:02:19.173 00:02:19.173 apps: 00:02:19.173 00:02:19.173 00:02:19.173 Message: 00:02:19.173 ================= 00:02:19.173 Libraries Enabled 00:02:19.173 ================= 00:02:19.173 00:02:19.173 libs: 00:02:19.173 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.173 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:19.173 cryptodev, dmadev, power, reorder, security, vhost, 00:02:19.173 00:02:19.173 Message: 00:02:19.173 =============== 00:02:19.173 Drivers Enabled 00:02:19.173 =============== 00:02:19.173 00:02:19.173 common: 00:02:19.173 00:02:19.173 bus: 00:02:19.173 pci, vdev, 00:02:19.173 mempool: 00:02:19.173 ring, 00:02:19.173 dma: 00:02:19.173 00:02:19.173 net: 00:02:19.173 00:02:19.173 crypto: 00:02:19.173 00:02:19.173 compress: 00:02:19.173 00:02:19.173 vdpa: 00:02:19.173 00:02:19.173 00:02:19.173 Message: 00:02:19.173 ================= 00:02:19.173 Content Skipped 00:02:19.173 ================= 00:02:19.173 00:02:19.173 apps: 00:02:19.173 dumpcap: explicitly disabled via build config 00:02:19.173 graph: explicitly disabled via build config 00:02:19.173 pdump: explicitly disabled via build config 00:02:19.173 proc-info: explicitly disabled via build config 00:02:19.173 test-acl: explicitly disabled via build config 00:02:19.173 test-bbdev: explicitly disabled via build config 00:02:19.173 test-cmdline: explicitly disabled via build config 00:02:19.173 test-compress-perf: explicitly disabled via build config 00:02:19.173 test-crypto-perf: explicitly disabled via build config 00:02:19.173 test-dma-perf: explicitly disabled via build config 00:02:19.173 test-eventdev: explicitly disabled via build config 00:02:19.173 test-fib: explicitly disabled via build config 00:02:19.173 test-flow-perf: explicitly disabled via build config 00:02:19.173 test-gpudev: explicitly disabled via build config 00:02:19.173 test-mldev: explicitly disabled via 
build config 00:02:19.173 test-pipeline: explicitly disabled via build config 00:02:19.173 test-pmd: explicitly disabled via build config 00:02:19.173 test-regex: explicitly disabled via build config 00:02:19.173 test-sad: explicitly disabled via build config 00:02:19.173 test-security-perf: explicitly disabled via build config 00:02:19.173 00:02:19.173 libs: 00:02:19.173 argparse: explicitly disabled via build config 00:02:19.173 metrics: explicitly disabled via build config 00:02:19.173 acl: explicitly disabled via build config 00:02:19.173 bbdev: explicitly disabled via build config 00:02:19.173 bitratestats: explicitly disabled via build config 00:02:19.173 bpf: explicitly disabled via build config 00:02:19.173 cfgfile: explicitly disabled via build config 00:02:19.173 distributor: explicitly disabled via build config 00:02:19.173 efd: explicitly disabled via build config 00:02:19.173 eventdev: explicitly disabled via build config 00:02:19.173 dispatcher: explicitly disabled via build config 00:02:19.173 gpudev: explicitly disabled via build config 00:02:19.173 gro: explicitly disabled via build config 00:02:19.173 gso: explicitly disabled via build config 00:02:19.173 ip_frag: explicitly disabled via build config 00:02:19.173 jobstats: explicitly disabled via build config 00:02:19.173 latencystats: explicitly disabled via build config 00:02:19.173 lpm: explicitly disabled via build config 00:02:19.173 member: explicitly disabled via build config 00:02:19.173 pcapng: explicitly disabled via build config 00:02:19.173 rawdev: explicitly disabled via build config 00:02:19.173 regexdev: explicitly disabled via build config 00:02:19.173 mldev: explicitly disabled via build config 00:02:19.173 rib: explicitly disabled via build config 00:02:19.173 sched: explicitly disabled via build config 00:02:19.173 stack: explicitly disabled via build config 00:02:19.173 ipsec: explicitly disabled via build config 00:02:19.173 pdcp: explicitly disabled via build config 00:02:19.173 fib: explicitly disabled via build config 00:02:19.173 port: explicitly disabled via build config 00:02:19.173 pdump: explicitly disabled via build config 00:02:19.173 table: explicitly disabled via build config 00:02:19.173 pipeline: explicitly disabled via build config 00:02:19.173 graph: explicitly disabled via build config 00:02:19.173 node: explicitly disabled via build config 00:02:19.173 00:02:19.173 drivers: 00:02:19.173 common/cpt: not in enabled drivers build config 00:02:19.173 common/dpaax: not in enabled drivers build config 00:02:19.173 common/iavf: not in enabled drivers build config 00:02:19.173 common/idpf: not in enabled drivers build config 00:02:19.173 common/ionic: not in enabled drivers build config 00:02:19.173 common/mvep: not in enabled drivers build config 00:02:19.173 common/octeontx: not in enabled drivers build config 00:02:19.173 bus/auxiliary: not in enabled drivers build config 00:02:19.173 bus/cdx: not in enabled drivers build config 00:02:19.173 bus/dpaa: not in enabled drivers build config 00:02:19.173 bus/fslmc: not in enabled drivers build config 00:02:19.173 bus/ifpga: not in enabled drivers build config 00:02:19.173 bus/platform: not in enabled drivers build config 00:02:19.173 bus/uacce: not in enabled drivers build config 00:02:19.173 bus/vmbus: not in enabled drivers build config 00:02:19.173 common/cnxk: not in enabled drivers build config 00:02:19.173 common/mlx5: not in enabled drivers build config 00:02:19.173 common/nfp: not in enabled drivers build config 00:02:19.173 
common/nitrox: not in enabled drivers build config 00:02:19.173 common/qat: not in enabled drivers build config 00:02:19.173 common/sfc_efx: not in enabled drivers build config 00:02:19.173 mempool/bucket: not in enabled drivers build config 00:02:19.173 mempool/cnxk: not in enabled drivers build config 00:02:19.173 mempool/dpaa: not in enabled drivers build config 00:02:19.173 mempool/dpaa2: not in enabled drivers build config 00:02:19.173 mempool/octeontx: not in enabled drivers build config 00:02:19.173 mempool/stack: not in enabled drivers build config 00:02:19.173 dma/cnxk: not in enabled drivers build config 00:02:19.173 dma/dpaa: not in enabled drivers build config 00:02:19.173 dma/dpaa2: not in enabled drivers build config 00:02:19.173 dma/hisilicon: not in enabled drivers build config 00:02:19.173 dma/idxd: not in enabled drivers build config 00:02:19.173 dma/ioat: not in enabled drivers build config 00:02:19.173 dma/skeleton: not in enabled drivers build config 00:02:19.173 net/af_packet: not in enabled drivers build config 00:02:19.173 net/af_xdp: not in enabled drivers build config 00:02:19.173 net/ark: not in enabled drivers build config 00:02:19.173 net/atlantic: not in enabled drivers build config 00:02:19.173 net/avp: not in enabled drivers build config 00:02:19.173 net/axgbe: not in enabled drivers build config 00:02:19.173 net/bnx2x: not in enabled drivers build config 00:02:19.174 net/bnxt: not in enabled drivers build config 00:02:19.174 net/bonding: not in enabled drivers build config 00:02:19.174 net/cnxk: not in enabled drivers build config 00:02:19.174 net/cpfl: not in enabled drivers build config 00:02:19.174 net/cxgbe: not in enabled drivers build config 00:02:19.174 net/dpaa: not in enabled drivers build config 00:02:19.174 net/dpaa2: not in enabled drivers build config 00:02:19.174 net/e1000: not in enabled drivers build config 00:02:19.174 net/ena: not in enabled drivers build config 00:02:19.174 net/enetc: not in enabled drivers build config 00:02:19.174 net/enetfec: not in enabled drivers build config 00:02:19.174 net/enic: not in enabled drivers build config 00:02:19.174 net/failsafe: not in enabled drivers build config 00:02:19.174 net/fm10k: not in enabled drivers build config 00:02:19.174 net/gve: not in enabled drivers build config 00:02:19.174 net/hinic: not in enabled drivers build config 00:02:19.174 net/hns3: not in enabled drivers build config 00:02:19.174 net/i40e: not in enabled drivers build config 00:02:19.174 net/iavf: not in enabled drivers build config 00:02:19.174 net/ice: not in enabled drivers build config 00:02:19.174 net/idpf: not in enabled drivers build config 00:02:19.174 net/igc: not in enabled drivers build config 00:02:19.174 net/ionic: not in enabled drivers build config 00:02:19.174 net/ipn3ke: not in enabled drivers build config 00:02:19.174 net/ixgbe: not in enabled drivers build config 00:02:19.174 net/mana: not in enabled drivers build config 00:02:19.174 net/memif: not in enabled drivers build config 00:02:19.174 net/mlx4: not in enabled drivers build config 00:02:19.174 net/mlx5: not in enabled drivers build config 00:02:19.174 net/mvneta: not in enabled drivers build config 00:02:19.174 net/mvpp2: not in enabled drivers build config 00:02:19.174 net/netvsc: not in enabled drivers build config 00:02:19.174 net/nfb: not in enabled drivers build config 00:02:19.174 net/nfp: not in enabled drivers build config 00:02:19.174 net/ngbe: not in enabled drivers build config 00:02:19.174 net/null: not in enabled drivers build config 
00:02:19.174 net/octeontx: not in enabled drivers build config 00:02:19.174 net/octeon_ep: not in enabled drivers build config 00:02:19.174 net/pcap: not in enabled drivers build config 00:02:19.174 net/pfe: not in enabled drivers build config 00:02:19.174 net/qede: not in enabled drivers build config 00:02:19.174 net/ring: not in enabled drivers build config 00:02:19.174 net/sfc: not in enabled drivers build config 00:02:19.174 net/softnic: not in enabled drivers build config 00:02:19.174 net/tap: not in enabled drivers build config 00:02:19.174 net/thunderx: not in enabled drivers build config 00:02:19.174 net/txgbe: not in enabled drivers build config 00:02:19.174 net/vdev_netvsc: not in enabled drivers build config 00:02:19.174 net/vhost: not in enabled drivers build config 00:02:19.174 net/virtio: not in enabled drivers build config 00:02:19.174 net/vmxnet3: not in enabled drivers build config 00:02:19.174 raw/*: missing internal dependency, "rawdev" 00:02:19.174 crypto/armv8: not in enabled drivers build config 00:02:19.174 crypto/bcmfs: not in enabled drivers build config 00:02:19.174 crypto/caam_jr: not in enabled drivers build config 00:02:19.174 crypto/ccp: not in enabled drivers build config 00:02:19.174 crypto/cnxk: not in enabled drivers build config 00:02:19.174 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.174 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.174 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.174 crypto/mlx5: not in enabled drivers build config 00:02:19.174 crypto/mvsam: not in enabled drivers build config 00:02:19.174 crypto/nitrox: not in enabled drivers build config 00:02:19.174 crypto/null: not in enabled drivers build config 00:02:19.174 crypto/octeontx: not in enabled drivers build config 00:02:19.174 crypto/openssl: not in enabled drivers build config 00:02:19.174 crypto/scheduler: not in enabled drivers build config 00:02:19.174 crypto/uadk: not in enabled drivers build config 00:02:19.174 crypto/virtio: not in enabled drivers build config 00:02:19.174 compress/isal: not in enabled drivers build config 00:02:19.174 compress/mlx5: not in enabled drivers build config 00:02:19.174 compress/nitrox: not in enabled drivers build config 00:02:19.174 compress/octeontx: not in enabled drivers build config 00:02:19.174 compress/zlib: not in enabled drivers build config 00:02:19.174 regex/*: missing internal dependency, "regexdev" 00:02:19.174 ml/*: missing internal dependency, "mldev" 00:02:19.174 vdpa/ifc: not in enabled drivers build config 00:02:19.174 vdpa/mlx5: not in enabled drivers build config 00:02:19.174 vdpa/nfp: not in enabled drivers build config 00:02:19.174 vdpa/sfc: not in enabled drivers build config 00:02:19.174 event/*: missing internal dependency, "eventdev" 00:02:19.174 baseband/*: missing internal dependency, "bbdev" 00:02:19.174 gpu/*: missing internal dependency, "gpudev" 00:02:19.174 00:02:19.174 00:02:19.174 Build targets in project: 85 00:02:19.174 00:02:19.174 DPDK 24.03.0 00:02:19.174 00:02:19.174 User defined options 00:02:19.174 buildtype : debug 00:02:19.174 default_library : shared 00:02:19.174 libdir : lib 00:02:19.174 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:19.174 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:19.174 c_link_args : 00:02:19.174 cpu_instruction_set: native 00:02:19.174 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:19.174 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:19.174 enable_docs : false 00:02:19.174 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:19.174 enable_kmods : false 00:02:19.174 max_lcores : 128 00:02:19.174 tests : false 00:02:19.174 00:02:19.174 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.174 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:19.174 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.174 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:19.174 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.174 [4/268] Linking static target lib/librte_log.a 00:02:19.174 [5/268] Linking static target lib/librte_kvargs.a 00:02:19.174 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.174 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.174 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.174 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.174 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.174 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.174 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.174 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.174 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.174 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.174 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.174 [17/268] Linking static target lib/librte_telemetry.a 00:02:19.174 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.174 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.174 [20/268] Linking target lib/librte_log.so.24.1 00:02:19.174 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:19.174 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:19.174 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.434 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:19.434 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.434 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.434 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.434 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.694 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.694 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.694 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.694 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.694 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.694 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:19.952 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:19.952 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.952 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:20.209 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:20.209 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:20.467 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:20.467 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.467 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.467 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:20.467 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:20.467 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.725 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.725 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.725 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.725 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.984 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.984 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.984 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.242 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.500 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.500 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.500 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.500 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.500 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.766 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.766 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.766 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.024 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.024 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:22.283 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.283 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.283 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.541 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.541 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.799 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.799 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.799 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.799 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.799 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.799 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.799 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.057 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.057 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.057 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:23.057 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.315 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.315 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.315 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.572 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.572 [84/268] Linking static target lib/librte_ring.a 00:02:23.572 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.572 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.572 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.572 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.572 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.572 [90/268] Linking static target lib/librte_rcu.a 00:02:23.572 [91/268] Linking static target lib/librte_eal.a 00:02:24.137 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.137 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.137 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.137 [95/268] Linking static target lib/librte_mempool.a 00:02:24.137 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.137 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.137 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.137 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.137 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.137 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.457 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.457 [103/268] Linking static target lib/librte_mbuf.a 00:02:24.729 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.729 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.729 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.729 [107/268] Linking static target lib/librte_meter.a 00:02:24.729 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.986 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.986 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.986 [111/268] Linking static target lib/librte_net.a 00:02:24.986 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.244 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.501 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.501 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.501 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.501 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.501 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.759 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.016 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.016 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.274 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.531 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.531 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.531 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.532 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.532 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.532 [128/268] Linking static target lib/librte_pci.a 00:02:26.790 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.790 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.790 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.790 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.790 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.790 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.790 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.048 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.048 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.048 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.048 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.048 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.048 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:27.048 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.048 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.048 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.048 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:27.306 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.306 [147/268] Linking static target lib/librte_ethdev.a 00:02:27.306 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.564 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.564 [150/268] Linking static target lib/librte_cmdline.a 00:02:27.564 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.822 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.822 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.822 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.822 [155/268] Linking static target lib/librte_timer.a 00:02:27.822 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.080 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.080 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.080 [159/268] Linking static target lib/librte_hash.a 00:02:28.338 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:28.338 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:28.338 [162/268] Linking static target lib/librte_compressdev.a 00:02:28.596 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:28.596 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.596 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.853 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.853 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.853 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.853 [169/268] Linking static target lib/librte_dmadev.a 00:02:29.111 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:29.111 [171/268] Linking static target lib/librte_cryptodev.a 00:02:29.111 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.369 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.369 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.369 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.369 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.369 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.626 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.883 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.884 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.884 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.884 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.884 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.141 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.141 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.141 [186/268] Linking static target lib/librte_power.a 00:02:30.399 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.399 [188/268] Linking static target lib/librte_reorder.a 00:02:30.657 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.657 [190/268] Linking static target lib/librte_security.a 00:02:30.657 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.657 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.914 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.914 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.172 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.429 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.429 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.429 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.687 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:31.687 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.687 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.945 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:31.945 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.202 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.202 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.202 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.202 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.459 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.459 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.459 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.459 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.459 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.716 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.716 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.716 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.716 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:32.716 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.716 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.716 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.716 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.716 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.716 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:32.974 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.974 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.974 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.974 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:32.974 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.232 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:33.798 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.798 [230/268] Linking static target lib/librte_vhost.a 00:02:34.734 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.734 [232/268] Linking target lib/librte_eal.so.24.1 00:02:34.992 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.992 [234/268] Linking target lib/librte_meter.so.24.1 00:02:34.992 [235/268] Linking target lib/librte_pci.so.24.1 00:02:34.992 [236/268] Linking target lib/librte_ring.so.24.1 00:02:34.992 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.992 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.992 [239/268] Linking target lib/librte_timer.so.24.1 00:02:34.992 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.992 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.992 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.992 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.992 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.992 [245/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.992 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:34.992 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.251 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:35.251 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.251 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.251 [251/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.251 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.251 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.510 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.510 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.510 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.510 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:35.510 [258/268] Linking target lib/librte_net.so.24.1 00:02:35.770 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.770 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.770 [261/268] Linking target lib/librte_security.so.24.1 00:02:35.770 [262/268] Linking target lib/librte_hash.so.24.1 00:02:35.770 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.770 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:35.770 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.770 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.028 [267/268] Linking target lib/librte_power.so.24.1 00:02:36.028 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.028 INFO: autodetecting backend as ninja 00:02:36.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.568 CC lib/ut_mock/mock.o 00:03:02.568 CC lib/log/log_flags.o 00:03:02.568 CC lib/log/log.o 00:03:02.568 CC 
lib/ut/ut.o 00:03:02.568 CC lib/log/log_deprecated.o 00:03:02.568 LIB libspdk_ut_mock.a 00:03:02.568 LIB libspdk_ut.a 00:03:02.568 LIB libspdk_log.a 00:03:02.568 SO libspdk_ut_mock.so.6.0 00:03:02.568 SO libspdk_ut.so.2.0 00:03:02.568 SO libspdk_log.so.7.1 00:03:02.568 SYMLINK libspdk_ut_mock.so 00:03:02.568 SYMLINK libspdk_ut.so 00:03:02.568 SYMLINK libspdk_log.so 00:03:02.568 CXX lib/trace_parser/trace.o 00:03:02.568 CC lib/dma/dma.o 00:03:02.568 CC lib/util/base64.o 00:03:02.568 CC lib/util/bit_array.o 00:03:02.568 CC lib/util/cpuset.o 00:03:02.568 CC lib/util/crc16.o 00:03:02.568 CC lib/ioat/ioat.o 00:03:02.568 CC lib/util/crc32.o 00:03:02.568 CC lib/util/crc32c.o 00:03:02.568 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.568 CC lib/util/crc32_ieee.o 00:03:02.568 CC lib/util/crc64.o 00:03:02.568 CC lib/vfio_user/host/vfio_user.o 00:03:02.568 CC lib/util/dif.o 00:03:02.568 CC lib/util/fd.o 00:03:02.568 LIB libspdk_dma.a 00:03:02.568 CC lib/util/fd_group.o 00:03:02.568 SO libspdk_dma.so.5.0 00:03:02.568 LIB libspdk_ioat.a 00:03:02.568 CC lib/util/file.o 00:03:02.568 SYMLINK libspdk_dma.so 00:03:02.568 CC lib/util/hexlify.o 00:03:02.568 CC lib/util/iov.o 00:03:02.568 SO libspdk_ioat.so.7.0 00:03:02.568 CC lib/util/math.o 00:03:02.568 CC lib/util/net.o 00:03:02.568 SYMLINK libspdk_ioat.so 00:03:02.568 CC lib/util/pipe.o 00:03:02.568 LIB libspdk_vfio_user.a 00:03:02.568 SO libspdk_vfio_user.so.5.0 00:03:02.568 CC lib/util/strerror_tls.o 00:03:02.568 CC lib/util/string.o 00:03:02.568 SYMLINK libspdk_vfio_user.so 00:03:02.568 CC lib/util/uuid.o 00:03:02.568 CC lib/util/xor.o 00:03:02.568 CC lib/util/zipf.o 00:03:02.568 CC lib/util/md5.o 00:03:02.568 LIB libspdk_util.a 00:03:02.568 SO libspdk_util.so.10.1 00:03:02.568 LIB libspdk_trace_parser.a 00:03:02.568 SYMLINK libspdk_util.so 00:03:02.568 SO libspdk_trace_parser.so.6.0 00:03:02.568 SYMLINK libspdk_trace_parser.so 00:03:02.568 CC lib/rdma_utils/rdma_utils.o 00:03:02.568 CC lib/env_dpdk/env.o 00:03:02.568 CC lib/env_dpdk/pci.o 00:03:02.568 CC lib/env_dpdk/memory.o 00:03:02.568 CC lib/env_dpdk/init.o 00:03:02.568 CC lib/env_dpdk/threads.o 00:03:02.568 CC lib/idxd/idxd.o 00:03:02.568 CC lib/vmd/vmd.o 00:03:02.568 CC lib/conf/conf.o 00:03:02.568 CC lib/json/json_parse.o 00:03:02.568 CC lib/env_dpdk/pci_ioat.o 00:03:02.568 LIB libspdk_conf.a 00:03:02.568 SO libspdk_conf.so.6.0 00:03:02.568 LIB libspdk_rdma_utils.a 00:03:02.568 CC lib/json/json_util.o 00:03:02.568 SO libspdk_rdma_utils.so.1.0 00:03:02.568 CC lib/env_dpdk/pci_virtio.o 00:03:02.568 SYMLINK libspdk_conf.so 00:03:02.568 CC lib/env_dpdk/pci_vmd.o 00:03:02.568 SYMLINK libspdk_rdma_utils.so 00:03:02.568 CC lib/json/json_write.o 00:03:02.568 CC lib/env_dpdk/pci_idxd.o 00:03:02.568 CC lib/env_dpdk/pci_event.o 00:03:02.568 CC lib/vmd/led.o 00:03:02.568 CC lib/env_dpdk/sigbus_handler.o 00:03:02.568 CC lib/env_dpdk/pci_dpdk.o 00:03:02.568 CC lib/rdma_provider/common.o 00:03:02.568 CC lib/idxd/idxd_user.o 00:03:02.568 CC lib/idxd/idxd_kernel.o 00:03:02.568 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.568 LIB libspdk_vmd.a 00:03:02.568 LIB libspdk_json.a 00:03:02.568 SO libspdk_vmd.so.6.0 00:03:02.568 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.568 SO libspdk_json.so.6.0 00:03:02.568 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.568 SYMLINK libspdk_vmd.so 00:03:02.568 SYMLINK libspdk_json.so 00:03:02.568 LIB libspdk_idxd.a 00:03:02.568 SO libspdk_idxd.so.12.1 00:03:02.568 SYMLINK libspdk_idxd.so 00:03:02.568 LIB libspdk_rdma_provider.a 00:03:02.568 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.568 
CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.568 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.568 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.568 SO libspdk_rdma_provider.so.7.0 00:03:02.568 SYMLINK libspdk_rdma_provider.so 00:03:02.568 LIB libspdk_jsonrpc.a 00:03:02.568 SO libspdk_jsonrpc.so.6.0 00:03:02.568 SYMLINK libspdk_jsonrpc.so 00:03:02.568 LIB libspdk_env_dpdk.a 00:03:02.568 CC lib/rpc/rpc.o 00:03:02.568 SO libspdk_env_dpdk.so.15.1 00:03:02.826 SYMLINK libspdk_env_dpdk.so 00:03:02.826 LIB libspdk_rpc.a 00:03:02.826 SO libspdk_rpc.so.6.0 00:03:02.826 SYMLINK libspdk_rpc.so 00:03:03.084 CC lib/keyring/keyring.o 00:03:03.084 CC lib/keyring/keyring_rpc.o 00:03:03.084 CC lib/notify/notify.o 00:03:03.084 CC lib/notify/notify_rpc.o 00:03:03.084 CC lib/trace/trace_flags.o 00:03:03.084 CC lib/trace/trace.o 00:03:03.084 CC lib/trace/trace_rpc.o 00:03:03.342 LIB libspdk_notify.a 00:03:03.342 LIB libspdk_keyring.a 00:03:03.342 SO libspdk_notify.so.6.0 00:03:03.342 SO libspdk_keyring.so.2.0 00:03:03.342 SYMLINK libspdk_notify.so 00:03:03.342 LIB libspdk_trace.a 00:03:03.342 SYMLINK libspdk_keyring.so 00:03:03.600 SO libspdk_trace.so.11.0 00:03:03.600 SYMLINK libspdk_trace.so 00:03:03.858 CC lib/sock/sock_rpc.o 00:03:03.858 CC lib/sock/sock.o 00:03:03.858 CC lib/thread/thread.o 00:03:03.858 CC lib/thread/iobuf.o 00:03:04.424 LIB libspdk_sock.a 00:03:04.424 SO libspdk_sock.so.10.0 00:03:04.425 SYMLINK libspdk_sock.so 00:03:04.682 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.682 CC lib/nvme/nvme_ctrlr.o 00:03:04.682 CC lib/nvme/nvme_fabric.o 00:03:04.682 CC lib/nvme/nvme_ns_cmd.o 00:03:04.682 CC lib/nvme/nvme_ns.o 00:03:04.682 CC lib/nvme/nvme_pcie_common.o 00:03:04.682 CC lib/nvme/nvme.o 00:03:04.682 CC lib/nvme/nvme_qpair.o 00:03:04.682 CC lib/nvme/nvme_pcie.o 00:03:05.616 LIB libspdk_thread.a 00:03:05.616 CC lib/nvme/nvme_quirks.o 00:03:05.616 SO libspdk_thread.so.11.0 00:03:05.616 CC lib/nvme/nvme_transport.o 00:03:05.616 CC lib/nvme/nvme_discovery.o 00:03:05.616 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.616 SYMLINK libspdk_thread.so 00:03:05.616 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.616 CC lib/nvme/nvme_tcp.o 00:03:05.616 CC lib/nvme/nvme_opal.o 00:03:05.873 CC lib/nvme/nvme_io_msg.o 00:03:05.873 CC lib/accel/accel.o 00:03:06.131 CC lib/accel/accel_rpc.o 00:03:06.131 CC lib/accel/accel_sw.o 00:03:06.389 CC lib/blob/blobstore.o 00:03:06.389 CC lib/init/json_config.o 00:03:06.389 CC lib/virtio/virtio.o 00:03:06.389 CC lib/virtio/virtio_vhost_user.o 00:03:06.389 CC lib/virtio/virtio_vfio_user.o 00:03:06.647 CC lib/virtio/virtio_pci.o 00:03:06.647 CC lib/init/subsystem.o 00:03:06.647 CC lib/nvme/nvme_poll_group.o 00:03:06.647 CC lib/nvme/nvme_zns.o 00:03:06.647 CC lib/blob/request.o 00:03:06.647 CC lib/blob/zeroes.o 00:03:06.904 CC lib/init/subsystem_rpc.o 00:03:06.904 LIB libspdk_virtio.a 00:03:06.904 SO libspdk_virtio.so.7.0 00:03:06.904 CC lib/blob/blob_bs_dev.o 00:03:06.904 CC lib/init/rpc.o 00:03:06.904 SYMLINK libspdk_virtio.so 00:03:06.904 LIB libspdk_accel.a 00:03:07.162 CC lib/nvme/nvme_stubs.o 00:03:07.162 CC lib/nvme/nvme_auth.o 00:03:07.162 SO libspdk_accel.so.16.0 00:03:07.162 LIB libspdk_init.a 00:03:07.162 SYMLINK libspdk_accel.so 00:03:07.162 CC lib/nvme/nvme_cuse.o 00:03:07.162 SO libspdk_init.so.6.0 00:03:07.162 CC lib/nvme/nvme_rdma.o 00:03:07.162 CC lib/fsdev/fsdev.o 00:03:07.162 SYMLINK libspdk_init.so 00:03:07.162 CC lib/fsdev/fsdev_io.o 00:03:07.420 CC lib/fsdev/fsdev_rpc.o 00:03:07.421 CC lib/bdev/bdev.o 00:03:07.421 CC lib/event/app.o 00:03:07.421 CC lib/event/reactor.o 
00:03:07.678 CC lib/event/log_rpc.o 00:03:07.678 CC lib/bdev/bdev_rpc.o 00:03:07.936 CC lib/event/app_rpc.o 00:03:07.936 LIB libspdk_fsdev.a 00:03:07.936 CC lib/event/scheduler_static.o 00:03:07.936 SO libspdk_fsdev.so.2.0 00:03:07.936 CC lib/bdev/bdev_zone.o 00:03:07.936 CC lib/bdev/part.o 00:03:08.195 SYMLINK libspdk_fsdev.so 00:03:08.195 CC lib/bdev/scsi_nvme.o 00:03:08.195 LIB libspdk_event.a 00:03:08.195 SO libspdk_event.so.14.0 00:03:08.195 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:08.454 SYMLINK libspdk_event.so 00:03:08.454 LIB libspdk_nvme.a 00:03:08.711 SO libspdk_nvme.so.15.0 00:03:08.969 LIB libspdk_fuse_dispatcher.a 00:03:08.969 SO libspdk_fuse_dispatcher.so.1.0 00:03:08.969 SYMLINK libspdk_fuse_dispatcher.so 00:03:08.969 SYMLINK libspdk_nvme.so 00:03:09.535 LIB libspdk_blob.a 00:03:09.535 SO libspdk_blob.so.11.0 00:03:09.535 SYMLINK libspdk_blob.so 00:03:09.794 CC lib/blobfs/tree.o 00:03:09.794 CC lib/blobfs/blobfs.o 00:03:09.794 CC lib/lvol/lvol.o 00:03:10.361 LIB libspdk_bdev.a 00:03:10.361 SO libspdk_bdev.so.17.0 00:03:10.361 SYMLINK libspdk_bdev.so 00:03:10.619 CC lib/ublk/ublk.o 00:03:10.619 CC lib/ublk/ublk_rpc.o 00:03:10.619 CC lib/scsi/dev.o 00:03:10.619 CC lib/scsi/lun.o 00:03:10.619 CC lib/scsi/port.o 00:03:10.619 CC lib/nbd/nbd.o 00:03:10.619 CC lib/ftl/ftl_core.o 00:03:10.619 CC lib/nvmf/ctrlr.o 00:03:10.876 LIB libspdk_blobfs.a 00:03:10.876 SO libspdk_blobfs.so.10.0 00:03:10.876 CC lib/scsi/scsi.o 00:03:10.876 SYMLINK libspdk_blobfs.so 00:03:10.876 CC lib/nbd/nbd_rpc.o 00:03:10.876 CC lib/nvmf/ctrlr_discovery.o 00:03:10.877 CC lib/scsi/scsi_bdev.o 00:03:10.877 LIB libspdk_lvol.a 00:03:11.158 SO libspdk_lvol.so.10.0 00:03:11.158 CC lib/ftl/ftl_init.o 00:03:11.158 CC lib/nvmf/ctrlr_bdev.o 00:03:11.158 SYMLINK libspdk_lvol.so 00:03:11.158 CC lib/scsi/scsi_pr.o 00:03:11.158 CC lib/scsi/scsi_rpc.o 00:03:11.158 CC lib/scsi/task.o 00:03:11.158 LIB libspdk_nbd.a 00:03:11.158 CC lib/nvmf/subsystem.o 00:03:11.158 CC lib/ftl/ftl_layout.o 00:03:11.432 SO libspdk_nbd.so.7.0 00:03:11.432 LIB libspdk_ublk.a 00:03:11.432 SYMLINK libspdk_nbd.so 00:03:11.432 CC lib/ftl/ftl_debug.o 00:03:11.432 SO libspdk_ublk.so.3.0 00:03:11.432 CC lib/nvmf/nvmf.o 00:03:11.432 CC lib/nvmf/nvmf_rpc.o 00:03:11.432 CC lib/nvmf/transport.o 00:03:11.432 SYMLINK libspdk_ublk.so 00:03:11.432 LIB libspdk_scsi.a 00:03:11.432 CC lib/nvmf/tcp.o 00:03:11.432 SO libspdk_scsi.so.9.0 00:03:11.690 SYMLINK libspdk_scsi.so 00:03:11.690 CC lib/ftl/ftl_io.o 00:03:11.690 CC lib/ftl/ftl_sb.o 00:03:11.690 CC lib/iscsi/conn.o 00:03:11.690 CC lib/vhost/vhost.o 00:03:11.948 CC lib/iscsi/init_grp.o 00:03:11.948 CC lib/ftl/ftl_l2p.o 00:03:12.205 CC lib/nvmf/stubs.o 00:03:12.205 CC lib/ftl/ftl_l2p_flat.o 00:03:12.205 CC lib/nvmf/mdns_server.o 00:03:12.205 CC lib/vhost/vhost_rpc.o 00:03:12.205 CC lib/vhost/vhost_scsi.o 00:03:12.463 CC lib/vhost/vhost_blk.o 00:03:12.463 CC lib/iscsi/iscsi.o 00:03:12.463 CC lib/ftl/ftl_nv_cache.o 00:03:12.463 CC lib/ftl/ftl_band.o 00:03:12.720 CC lib/ftl/ftl_band_ops.o 00:03:12.720 CC lib/ftl/ftl_writer.o 00:03:12.720 CC lib/ftl/ftl_rq.o 00:03:12.977 CC lib/vhost/rte_vhost_user.o 00:03:12.977 CC lib/ftl/ftl_reloc.o 00:03:12.977 CC lib/ftl/ftl_l2p_cache.o 00:03:12.977 CC lib/iscsi/param.o 00:03:12.977 CC lib/ftl/ftl_p2l.o 00:03:12.977 CC lib/nvmf/rdma.o 00:03:13.235 CC lib/nvmf/auth.o 00:03:13.235 CC lib/ftl/ftl_p2l_log.o 00:03:13.235 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.493 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.493 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.493 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:13.493 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:13.493 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:13.751 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.751 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.751 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.751 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.751 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:14.009 CC lib/iscsi/portal_grp.o 00:03:14.009 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:14.009 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:14.009 CC lib/ftl/utils/ftl_conf.o 00:03:14.009 CC lib/iscsi/tgt_node.o 00:03:14.009 CC lib/iscsi/iscsi_subsystem.o 00:03:14.009 LIB libspdk_vhost.a 00:03:14.009 CC lib/ftl/utils/ftl_md.o 00:03:14.009 CC lib/ftl/utils/ftl_mempool.o 00:03:14.009 SO libspdk_vhost.so.8.0 00:03:14.009 CC lib/iscsi/iscsi_rpc.o 00:03:14.266 CC lib/iscsi/task.o 00:03:14.266 SYMLINK libspdk_vhost.so 00:03:14.266 CC lib/ftl/utils/ftl_bitmap.o 00:03:14.266 CC lib/ftl/utils/ftl_property.o 00:03:14.266 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:14.266 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:14.524 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.524 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.524 LIB libspdk_iscsi.a 00:03:14.782 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.782 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.782 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:14.782 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:14.782 CC lib/ftl/base/ftl_base_dev.o 00:03:14.782 SO libspdk_iscsi.so.8.0 00:03:14.782 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.782 CC lib/ftl/ftl_trace.o 00:03:14.782 SYMLINK libspdk_iscsi.so 00:03:15.040 LIB libspdk_ftl.a 00:03:15.297 SO libspdk_ftl.so.9.0 00:03:15.297 LIB libspdk_nvmf.a 00:03:15.555 SO libspdk_nvmf.so.20.0 00:03:15.555 SYMLINK libspdk_ftl.so 00:03:15.555 SYMLINK libspdk_nvmf.so 00:03:16.119 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.119 CC module/blob/bdev/blob_bdev.o 00:03:16.119 CC module/scheduler/gscheduler/gscheduler.o 00:03:16.119 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.119 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.119 CC module/fsdev/aio/fsdev_aio.o 00:03:16.119 CC module/accel/error/accel_error.o 00:03:16.119 CC module/keyring/file/keyring.o 00:03:16.119 CC module/accel/ioat/accel_ioat.o 00:03:16.119 CC module/sock/posix/posix.o 00:03:16.119 LIB libspdk_env_dpdk_rpc.a 00:03:16.119 SO libspdk_env_dpdk_rpc.so.6.0 00:03:16.119 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.119 CC module/keyring/file/keyring_rpc.o 00:03:16.119 LIB libspdk_scheduler_gscheduler.a 00:03:16.119 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.119 SYMLINK libspdk_env_dpdk_rpc.so 00:03:16.119 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.377 LIB libspdk_scheduler_dynamic.a 00:03:16.377 CC module/accel/error/accel_error_rpc.o 00:03:16.377 SO libspdk_scheduler_dynamic.so.4.0 00:03:16.377 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.377 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:16.377 LIB libspdk_blob_bdev.a 00:03:16.377 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.377 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.377 CC module/fsdev/aio/linux_aio_mgr.o 00:03:16.377 SO libspdk_blob_bdev.so.11.0 00:03:16.377 SYMLINK libspdk_scheduler_dynamic.so 00:03:16.377 LIB libspdk_keyring_file.a 00:03:16.377 SYMLINK libspdk_blob_bdev.so 00:03:16.377 
SO libspdk_keyring_file.so.2.0 00:03:16.377 CC module/keyring/linux/keyring.o 00:03:16.377 LIB libspdk_accel_error.a 00:03:16.377 SO libspdk_accel_error.so.2.0 00:03:16.377 SYMLINK libspdk_keyring_file.so 00:03:16.377 LIB libspdk_accel_ioat.a 00:03:16.377 CC module/keyring/linux/keyring_rpc.o 00:03:16.635 SO libspdk_accel_ioat.so.6.0 00:03:16.635 SYMLINK libspdk_accel_error.so 00:03:16.635 CC module/accel/dsa/accel_dsa.o 00:03:16.635 CC module/accel/dsa/accel_dsa_rpc.o 00:03:16.635 SYMLINK libspdk_accel_ioat.so 00:03:16.635 CC module/accel/iaa/accel_iaa.o 00:03:16.635 LIB libspdk_keyring_linux.a 00:03:16.635 SO libspdk_keyring_linux.so.1.0 00:03:16.892 LIB libspdk_fsdev_aio.a 00:03:16.892 SYMLINK libspdk_keyring_linux.so 00:03:16.892 CC module/bdev/error/vbdev_error.o 00:03:16.892 SO libspdk_fsdev_aio.so.1.0 00:03:16.892 CC module/bdev/delay/vbdev_delay.o 00:03:16.892 CC module/bdev/gpt/gpt.o 00:03:16.892 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.892 LIB libspdk_sock_posix.a 00:03:16.892 LIB libspdk_accel_dsa.a 00:03:16.892 SO libspdk_sock_posix.so.6.0 00:03:16.892 SYMLINK libspdk_fsdev_aio.so 00:03:16.892 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.892 SO libspdk_accel_dsa.so.5.0 00:03:16.892 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.892 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.892 CC module/bdev/malloc/bdev_malloc.o 00:03:16.892 SYMLINK libspdk_sock_posix.so 00:03:16.892 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.892 SYMLINK libspdk_accel_dsa.so 00:03:16.892 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.150 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.150 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.150 LIB libspdk_accel_iaa.a 00:03:17.150 SO libspdk_accel_iaa.so.3.0 00:03:17.150 LIB libspdk_bdev_error.a 00:03:17.150 SYMLINK libspdk_accel_iaa.so 00:03:17.150 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.150 SO libspdk_bdev_error.so.6.0 00:03:17.150 LIB libspdk_bdev_gpt.a 00:03:17.150 LIB libspdk_blobfs_bdev.a 00:03:17.150 LIB libspdk_bdev_delay.a 00:03:17.150 SO libspdk_bdev_gpt.so.6.0 00:03:17.150 CC module/bdev/null/bdev_null.o 00:03:17.150 SYMLINK libspdk_bdev_error.so 00:03:17.150 CC module/bdev/null/bdev_null_rpc.o 00:03:17.150 SO libspdk_blobfs_bdev.so.6.0 00:03:17.407 SO libspdk_bdev_delay.so.6.0 00:03:17.407 SYMLINK libspdk_bdev_gpt.so 00:03:17.407 SYMLINK libspdk_blobfs_bdev.so 00:03:17.407 SYMLINK libspdk_bdev_delay.so 00:03:17.407 LIB libspdk_bdev_malloc.a 00:03:17.407 SO libspdk_bdev_malloc.so.6.0 00:03:17.407 CC module/bdev/nvme/bdev_nvme.o 00:03:17.407 LIB libspdk_bdev_lvol.a 00:03:17.407 SYMLINK libspdk_bdev_malloc.so 00:03:17.407 SO libspdk_bdev_lvol.so.6.0 00:03:17.407 CC module/bdev/split/vbdev_split.o 00:03:17.407 CC module/bdev/passthru/vbdev_passthru.o 00:03:17.407 CC module/bdev/raid/bdev_raid.o 00:03:17.664 CC module/bdev/aio/bdev_aio.o 00:03:17.664 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.664 LIB libspdk_bdev_null.a 00:03:17.664 SYMLINK libspdk_bdev_lvol.so 00:03:17.664 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.664 SO libspdk_bdev_null.so.6.0 00:03:17.664 SYMLINK libspdk_bdev_null.so 00:03:17.664 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.664 CC module/bdev/ftl/bdev_ftl.o 00:03:17.664 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.664 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.921 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.921 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.921 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.921 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.921 CC module/bdev/raid/raid0.o 
00:03:17.921 LIB libspdk_bdev_split.a 00:03:17.921 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.921 SO libspdk_bdev_split.so.6.0 00:03:17.921 CC module/bdev/raid/raid1.o 00:03:17.921 LIB libspdk_bdev_passthru.a 00:03:18.179 SYMLINK libspdk_bdev_split.so 00:03:18.179 LIB libspdk_bdev_iscsi.a 00:03:18.179 SO libspdk_bdev_passthru.so.6.0 00:03:18.179 LIB libspdk_bdev_aio.a 00:03:18.179 SO libspdk_bdev_iscsi.so.6.0 00:03:18.179 LIB libspdk_bdev_zone_block.a 00:03:18.179 SO libspdk_bdev_aio.so.6.0 00:03:18.179 SO libspdk_bdev_zone_block.so.6.0 00:03:18.179 SYMLINK libspdk_bdev_passthru.so 00:03:18.179 CC module/bdev/raid/concat.o 00:03:18.179 SYMLINK libspdk_bdev_iscsi.so 00:03:18.179 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.179 SYMLINK libspdk_bdev_aio.so 00:03:18.179 CC module/bdev/nvme/nvme_rpc.o 00:03:18.179 SYMLINK libspdk_bdev_zone_block.so 00:03:18.179 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.179 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.179 LIB libspdk_bdev_ftl.a 00:03:18.179 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.179 SO libspdk_bdev_ftl.so.6.0 00:03:18.436 CC module/bdev/nvme/vbdev_opal.o 00:03:18.436 SYMLINK libspdk_bdev_ftl.so 00:03:18.436 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.436 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.436 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.436 LIB libspdk_bdev_raid.a 00:03:18.707 SO libspdk_bdev_raid.so.6.0 00:03:18.707 SYMLINK libspdk_bdev_raid.so 00:03:18.707 LIB libspdk_bdev_virtio.a 00:03:18.707 SO libspdk_bdev_virtio.so.6.0 00:03:18.969 SYMLINK libspdk_bdev_virtio.so 00:03:19.974 LIB libspdk_bdev_nvme.a 00:03:20.232 SO libspdk_bdev_nvme.so.7.1 00:03:20.232 SYMLINK libspdk_bdev_nvme.so 00:03:20.797 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:20.797 CC module/event/subsystems/iobuf/iobuf.o 00:03:20.797 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:20.797 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:20.797 CC module/event/subsystems/vmd/vmd.o 00:03:20.797 CC module/event/subsystems/keyring/keyring.o 00:03:20.797 CC module/event/subsystems/scheduler/scheduler.o 00:03:20.797 CC module/event/subsystems/sock/sock.o 00:03:20.797 CC module/event/subsystems/fsdev/fsdev.o 00:03:20.797 LIB libspdk_event_vhost_blk.a 00:03:21.055 LIB libspdk_event_keyring.a 00:03:21.055 SO libspdk_event_vhost_blk.so.3.0 00:03:21.055 LIB libspdk_event_sock.a 00:03:21.055 LIB libspdk_event_scheduler.a 00:03:21.055 SO libspdk_event_keyring.so.1.0 00:03:21.055 LIB libspdk_event_iobuf.a 00:03:21.055 LIB libspdk_event_vmd.a 00:03:21.055 SO libspdk_event_scheduler.so.4.0 00:03:21.055 SO libspdk_event_sock.so.5.0 00:03:21.055 SO libspdk_event_iobuf.so.3.0 00:03:21.055 SO libspdk_event_vmd.so.6.0 00:03:21.055 SYMLINK libspdk_event_vhost_blk.so 00:03:21.055 LIB libspdk_event_fsdev.a 00:03:21.055 SYMLINK libspdk_event_keyring.so 00:03:21.055 SYMLINK libspdk_event_sock.so 00:03:21.055 SO libspdk_event_fsdev.so.1.0 00:03:21.055 SYMLINK libspdk_event_scheduler.so 00:03:21.055 SYMLINK libspdk_event_iobuf.so 00:03:21.055 SYMLINK libspdk_event_vmd.so 00:03:21.055 SYMLINK libspdk_event_fsdev.so 00:03:21.312 CC module/event/subsystems/accel/accel.o 00:03:21.570 LIB libspdk_event_accel.a 00:03:21.570 SO libspdk_event_accel.so.6.0 00:03:21.570 SYMLINK libspdk_event_accel.so 00:03:21.827 CC module/event/subsystems/bdev/bdev.o 00:03:22.084 LIB libspdk_event_bdev.a 00:03:22.084 SO libspdk_event_bdev.so.6.0 00:03:22.084 SYMLINK libspdk_event_bdev.so 00:03:22.342 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:22.342 CC 
module/event/subsystems/ublk/ublk.o 00:03:22.342 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:22.342 CC module/event/subsystems/nbd/nbd.o 00:03:22.342 CC module/event/subsystems/scsi/scsi.o 00:03:22.599 LIB libspdk_event_ublk.a 00:03:22.599 LIB libspdk_event_nbd.a 00:03:22.599 LIB libspdk_event_scsi.a 00:03:22.599 SO libspdk_event_ublk.so.3.0 00:03:22.599 SO libspdk_event_nbd.so.6.0 00:03:22.599 SO libspdk_event_scsi.so.6.0 00:03:22.599 SYMLINK libspdk_event_ublk.so 00:03:22.599 LIB libspdk_event_nvmf.a 00:03:22.599 SYMLINK libspdk_event_nbd.so 00:03:22.599 SYMLINK libspdk_event_scsi.so 00:03:22.599 SO libspdk_event_nvmf.so.6.0 00:03:22.856 SYMLINK libspdk_event_nvmf.so 00:03:22.856 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.856 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.113 LIB libspdk_event_vhost_scsi.a 00:03:23.113 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.113 LIB libspdk_event_iscsi.a 00:03:23.113 SO libspdk_event_iscsi.so.6.0 00:03:23.113 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.371 SYMLINK libspdk_event_iscsi.so 00:03:23.371 SO libspdk.so.6.0 00:03:23.371 SYMLINK libspdk.so 00:03:23.629 TEST_HEADER include/spdk/accel.h 00:03:23.629 CXX app/trace/trace.o 00:03:23.629 CC test/rpc_client/rpc_client_test.o 00:03:23.629 TEST_HEADER include/spdk/accel_module.h 00:03:23.629 TEST_HEADER include/spdk/assert.h 00:03:23.629 TEST_HEADER include/spdk/barrier.h 00:03:23.629 TEST_HEADER include/spdk/base64.h 00:03:23.629 TEST_HEADER include/spdk/bdev.h 00:03:23.629 TEST_HEADER include/spdk/bdev_module.h 00:03:23.629 CC app/trace_record/trace_record.o 00:03:23.629 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.629 TEST_HEADER include/spdk/bit_array.h 00:03:23.629 TEST_HEADER include/spdk/bit_pool.h 00:03:23.629 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.629 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.629 TEST_HEADER include/spdk/blobfs.h 00:03:23.629 TEST_HEADER include/spdk/blob.h 00:03:23.629 TEST_HEADER include/spdk/conf.h 00:03:23.629 TEST_HEADER include/spdk/config.h 00:03:23.629 TEST_HEADER include/spdk/cpuset.h 00:03:23.629 CC app/nvmf_tgt/nvmf_main.o 00:03:23.629 TEST_HEADER include/spdk/crc16.h 00:03:23.629 TEST_HEADER include/spdk/crc32.h 00:03:23.629 TEST_HEADER include/spdk/crc64.h 00:03:23.629 TEST_HEADER include/spdk/dif.h 00:03:23.629 TEST_HEADER include/spdk/dma.h 00:03:23.629 TEST_HEADER include/spdk/endian.h 00:03:23.629 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.629 TEST_HEADER include/spdk/env.h 00:03:23.629 TEST_HEADER include/spdk/event.h 00:03:23.629 TEST_HEADER include/spdk/fd_group.h 00:03:23.629 TEST_HEADER include/spdk/fd.h 00:03:23.629 TEST_HEADER include/spdk/file.h 00:03:23.629 TEST_HEADER include/spdk/fsdev.h 00:03:23.629 TEST_HEADER include/spdk/fsdev_module.h 00:03:23.887 TEST_HEADER include/spdk/ftl.h 00:03:23.887 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:23.887 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.887 TEST_HEADER include/spdk/hexlify.h 00:03:23.887 TEST_HEADER include/spdk/histogram_data.h 00:03:23.887 TEST_HEADER include/spdk/idxd.h 00:03:23.887 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.887 TEST_HEADER include/spdk/init.h 00:03:23.887 TEST_HEADER include/spdk/ioat.h 00:03:23.887 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.887 CC test/thread/poller_perf/poller_perf.o 00:03:23.887 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.887 TEST_HEADER include/spdk/json.h 00:03:23.887 CC examples/util/zipf/zipf.o 00:03:23.887 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.887 TEST_HEADER include/spdk/keyring.h 
00:03:23.887 TEST_HEADER include/spdk/keyring_module.h 00:03:23.887 TEST_HEADER include/spdk/likely.h 00:03:23.887 TEST_HEADER include/spdk/log.h 00:03:23.887 TEST_HEADER include/spdk/lvol.h 00:03:23.887 TEST_HEADER include/spdk/md5.h 00:03:23.887 TEST_HEADER include/spdk/memory.h 00:03:23.887 TEST_HEADER include/spdk/mmio.h 00:03:23.887 TEST_HEADER include/spdk/nbd.h 00:03:23.887 TEST_HEADER include/spdk/net.h 00:03:23.887 TEST_HEADER include/spdk/notify.h 00:03:23.887 CC test/dma/test_dma/test_dma.o 00:03:23.887 TEST_HEADER include/spdk/nvme.h 00:03:23.887 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.887 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.887 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.887 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.887 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.887 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.887 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.887 TEST_HEADER include/spdk/nvmf.h 00:03:23.887 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.887 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.887 CC test/app/bdev_svc/bdev_svc.o 00:03:23.887 TEST_HEADER include/spdk/opal.h 00:03:23.887 TEST_HEADER include/spdk/opal_spec.h 00:03:23.887 TEST_HEADER include/spdk/pci_ids.h 00:03:23.887 TEST_HEADER include/spdk/pipe.h 00:03:23.887 TEST_HEADER include/spdk/queue.h 00:03:23.887 TEST_HEADER include/spdk/reduce.h 00:03:23.887 TEST_HEADER include/spdk/rpc.h 00:03:23.887 TEST_HEADER include/spdk/scheduler.h 00:03:23.887 TEST_HEADER include/spdk/scsi.h 00:03:23.887 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.887 TEST_HEADER include/spdk/sock.h 00:03:23.887 TEST_HEADER include/spdk/stdinc.h 00:03:23.887 TEST_HEADER include/spdk/string.h 00:03:23.887 TEST_HEADER include/spdk/thread.h 00:03:23.887 TEST_HEADER include/spdk/trace.h 00:03:23.887 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.887 TEST_HEADER include/spdk/trace_parser.h 00:03:23.887 TEST_HEADER include/spdk/tree.h 00:03:23.887 TEST_HEADER include/spdk/ublk.h 00:03:23.887 TEST_HEADER include/spdk/util.h 00:03:23.887 TEST_HEADER include/spdk/uuid.h 00:03:23.887 TEST_HEADER include/spdk/version.h 00:03:23.887 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.887 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.887 LINK rpc_client_test 00:03:23.887 TEST_HEADER include/spdk/vhost.h 00:03:23.887 TEST_HEADER include/spdk/vmd.h 00:03:23.887 TEST_HEADER include/spdk/xor.h 00:03:23.887 TEST_HEADER include/spdk/zipf.h 00:03:23.887 CXX test/cpp_headers/accel.o 00:03:23.887 LINK nvmf_tgt 00:03:23.887 LINK poller_perf 00:03:23.888 LINK zipf 00:03:24.145 LINK spdk_trace_record 00:03:24.145 CXX test/cpp_headers/accel_module.o 00:03:24.145 LINK bdev_svc 00:03:24.145 LINK spdk_trace 00:03:24.145 CC test/env/vtophys/vtophys.o 00:03:24.402 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.402 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.402 CXX test/cpp_headers/assert.o 00:03:24.402 CXX test/cpp_headers/barrier.o 00:03:24.402 CC app/spdk_tgt/spdk_tgt.o 00:03:24.402 CC examples/ioat/perf/perf.o 00:03:24.402 LINK vtophys 00:03:24.402 LINK test_dma 00:03:24.402 CC test/app/histogram_perf/histogram_perf.o 00:03:24.660 CXX test/cpp_headers/base64.o 00:03:24.660 CXX test/cpp_headers/bdev.o 00:03:24.660 LINK iscsi_tgt 00:03:24.660 LINK mem_callbacks 00:03:24.660 LINK spdk_tgt 00:03:24.660 LINK ioat_perf 00:03:24.660 LINK histogram_perf 00:03:24.660 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.660 LINK nvme_fuzz 00:03:24.660 CXX test/cpp_headers/bdev_module.o 00:03:24.918 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.918 CC examples/vmd/led/led.o 00:03:24.918 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.918 CXX test/cpp_headers/bdev_zone.o 00:03:24.918 CC examples/ioat/verify/verify.o 00:03:24.918 LINK lsvmd 00:03:24.918 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.918 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.918 CC app/spdk_lspci/spdk_lspci.o 00:03:24.918 LINK led 00:03:24.918 LINK env_dpdk_post_init 00:03:24.918 CXX test/cpp_headers/bit_array.o 00:03:25.177 CC app/spdk_nvme_perf/perf.o 00:03:25.177 LINK spdk_lspci 00:03:25.177 LINK verify 00:03:25.177 CXX test/cpp_headers/bit_pool.o 00:03:25.177 CC app/spdk_nvme_identify/identify.o 00:03:25.177 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.177 CC test/env/memory/memory_ut.o 00:03:25.433 CXX test/cpp_headers/blob_bdev.o 00:03:25.433 LINK vhost_fuzz 00:03:25.433 CC test/app/jsoncat/jsoncat.o 00:03:25.433 CC test/env/pci/pci_ut.o 00:03:25.433 LINK spdk_nvme_discover 00:03:25.433 CC examples/idxd/perf/perf.o 00:03:25.433 LINK jsoncat 00:03:25.433 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.690 CC test/app/stub/stub.o 00:03:25.690 LINK idxd_perf 00:03:25.690 CXX test/cpp_headers/blobfs.o 00:03:25.948 LINK pci_ut 00:03:25.948 CC test/event/event_perf/event_perf.o 00:03:25.948 LINK stub 00:03:25.948 LINK spdk_nvme_perf 00:03:25.948 LINK spdk_nvme_identify 00:03:25.948 CXX test/cpp_headers/blob.o 00:03:25.948 CC app/spdk_top/spdk_top.o 00:03:25.948 LINK event_perf 00:03:26.205 CC app/vhost/vhost.o 00:03:26.205 CXX test/cpp_headers/conf.o 00:03:26.205 CC test/event/reactor/reactor.o 00:03:26.205 CC test/nvme/aer/aer.o 00:03:26.205 CC test/accel/dif/dif.o 00:03:26.463 CXX test/cpp_headers/config.o 00:03:26.463 LINK vhost 00:03:26.463 CC test/blobfs/mkfs/mkfs.o 00:03:26.463 CXX test/cpp_headers/cpuset.o 00:03:26.463 LINK reactor 00:03:26.463 LINK memory_ut 00:03:26.463 CXX test/cpp_headers/crc16.o 00:03:26.463 LINK mkfs 00:03:26.720 LINK aer 00:03:26.720 CC test/event/reactor_perf/reactor_perf.o 00:03:26.720 CC test/event/app_repeat/app_repeat.o 00:03:26.720 LINK iscsi_fuzz 00:03:26.720 CXX test/cpp_headers/crc32.o 00:03:26.978 LINK reactor_perf 00:03:26.978 CC test/event/scheduler/scheduler.o 00:03:26.978 LINK app_repeat 00:03:26.978 CC test/nvme/reset/reset.o 00:03:26.978 CXX test/cpp_headers/crc64.o 00:03:26.978 LINK spdk_top 00:03:27.236 LINK dif 00:03:27.236 LINK scheduler 00:03:27.236 CC test/lvol/esnap/esnap.o 00:03:27.236 CXX test/cpp_headers/dif.o 00:03:27.236 CC test/nvme/sgl/sgl.o 00:03:27.236 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.236 CC test/nvme/e2edp/nvme_dp.o 00:03:27.236 LINK reset 00:03:27.236 CXX test/cpp_headers/dma.o 00:03:27.494 CC app/spdk_dd/spdk_dd.o 00:03:27.494 LINK interrupt_tgt 00:03:27.494 CXX test/cpp_headers/endian.o 00:03:27.494 LINK sgl 00:03:27.494 LINK nvme_dp 00:03:27.753 CC examples/thread/thread/thread_ex.o 00:03:27.753 CC examples/sock/hello_world/hello_sock.o 00:03:27.753 CC test/bdev/bdevio/bdevio.o 00:03:27.753 CXX test/cpp_headers/env_dpdk.o 00:03:27.753 LINK spdk_dd 00:03:28.011 CC app/fio/nvme/fio_plugin.o 00:03:28.011 CC test/nvme/overhead/overhead.o 00:03:28.011 CC test/nvme/err_injection/err_injection.o 00:03:28.011 LINK thread 00:03:28.011 CXX test/cpp_headers/env.o 00:03:28.011 LINK hello_sock 00:03:28.011 CXX test/cpp_headers/event.o 00:03:28.011 LINK bdevio 00:03:28.269 CXX test/cpp_headers/fd_group.o 00:03:28.269 LINK err_injection 00:03:28.269 LINK overhead 00:03:28.269 CC test/nvme/reserve/reserve.o 
00:03:28.269 CC test/nvme/startup/startup.o 00:03:28.527 CXX test/cpp_headers/fd.o 00:03:28.527 CC examples/nvme/hello_world/hello_world.o 00:03:28.527 CC app/fio/bdev/fio_plugin.o 00:03:28.527 LINK spdk_nvme 00:03:28.527 LINK startup 00:03:28.527 LINK reserve 00:03:28.527 CC test/nvme/simple_copy/simple_copy.o 00:03:28.785 CXX test/cpp_headers/file.o 00:03:28.785 CC examples/nvme/reconnect/reconnect.o 00:03:28.785 LINK hello_world 00:03:28.785 CXX test/cpp_headers/fsdev.o 00:03:28.785 CXX test/cpp_headers/fsdev_module.o 00:03:28.785 CXX test/cpp_headers/ftl.o 00:03:29.043 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.043 LINK simple_copy 00:03:29.043 CXX test/cpp_headers/gpt_spec.o 00:03:29.043 CC examples/accel/perf/accel_perf.o 00:03:29.043 LINK reconnect 00:03:29.043 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:29.301 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:29.301 LINK spdk_bdev 00:03:29.301 CXX test/cpp_headers/hexlify.o 00:03:29.301 CC test/nvme/connect_stress/connect_stress.o 00:03:29.301 CC test/nvme/boot_partition/boot_partition.o 00:03:29.301 CC examples/nvme/arbitration/arbitration.o 00:03:29.558 CXX test/cpp_headers/histogram_data.o 00:03:29.558 CC examples/nvme/hotplug/hotplug.o 00:03:29.558 LINK hello_fsdev 00:03:29.558 LINK boot_partition 00:03:29.558 LINK connect_stress 00:03:29.816 CXX test/cpp_headers/idxd.o 00:03:29.816 LINK hotplug 00:03:29.816 LINK accel_perf 00:03:29.816 LINK arbitration 00:03:29.816 LINK nvme_manage 00:03:30.075 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.075 CC test/nvme/compliance/nvme_compliance.o 00:03:30.075 CC examples/nvme/abort/abort.o 00:03:30.075 CXX test/cpp_headers/idxd_spec.o 00:03:30.334 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.334 CXX test/cpp_headers/init.o 00:03:30.334 LINK cmb_copy 00:03:30.334 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.334 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.593 LINK abort 00:03:30.593 LINK nvme_compliance 00:03:30.593 CC test/nvme/fdp/fdp.o 00:03:30.593 CXX test/cpp_headers/ioat.o 00:03:30.593 LINK pmr_persistence 00:03:30.593 LINK doorbell_aers 00:03:30.593 LINK fused_ordering 00:03:30.850 CXX test/cpp_headers/ioat_spec.o 00:03:30.850 CXX test/cpp_headers/iscsi_spec.o 00:03:30.850 CC test/nvme/cuse/cuse.o 00:03:30.850 CC examples/blob/hello_world/hello_blob.o 00:03:30.850 CXX test/cpp_headers/json.o 00:03:30.850 CXX test/cpp_headers/jsonrpc.o 00:03:30.850 LINK fdp 00:03:31.108 CC examples/blob/cli/blobcli.o 00:03:31.108 CXX test/cpp_headers/keyring.o 00:03:31.108 LINK hello_blob 00:03:31.108 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.108 CXX test/cpp_headers/keyring_module.o 00:03:31.108 CXX test/cpp_headers/likely.o 00:03:31.366 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.366 CXX test/cpp_headers/log.o 00:03:31.366 CXX test/cpp_headers/lvol.o 00:03:31.366 CXX test/cpp_headers/md5.o 00:03:31.366 LINK hello_bdev 00:03:31.623 CXX test/cpp_headers/memory.o 00:03:31.623 CXX test/cpp_headers/mmio.o 00:03:31.623 CXX test/cpp_headers/nbd.o 00:03:31.623 LINK blobcli 00:03:31.623 CXX test/cpp_headers/net.o 00:03:31.623 CXX test/cpp_headers/notify.o 00:03:31.623 CXX test/cpp_headers/nvme.o 00:03:31.623 CXX test/cpp_headers/nvme_intel.o 00:03:31.881 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.881 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.881 CXX test/cpp_headers/nvme_spec.o 00:03:31.881 CXX test/cpp_headers/nvme_zns.o 00:03:31.881 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.881 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.881 CXX 
test/cpp_headers/nvmf.o 00:03:31.881 CXX test/cpp_headers/nvmf_spec.o 00:03:32.139 CXX test/cpp_headers/nvmf_transport.o 00:03:32.139 CXX test/cpp_headers/opal.o 00:03:32.139 CXX test/cpp_headers/opal_spec.o 00:03:32.139 CXX test/cpp_headers/pci_ids.o 00:03:32.139 LINK bdevperf 00:03:32.139 CXX test/cpp_headers/pipe.o 00:03:32.139 CXX test/cpp_headers/queue.o 00:03:32.396 CXX test/cpp_headers/reduce.o 00:03:32.396 CXX test/cpp_headers/rpc.o 00:03:32.396 CXX test/cpp_headers/scheduler.o 00:03:32.396 CXX test/cpp_headers/scsi.o 00:03:32.396 CXX test/cpp_headers/scsi_spec.o 00:03:32.396 CXX test/cpp_headers/sock.o 00:03:32.396 LINK cuse 00:03:32.396 CXX test/cpp_headers/stdinc.o 00:03:32.653 CXX test/cpp_headers/string.o 00:03:32.653 CXX test/cpp_headers/thread.o 00:03:32.653 CXX test/cpp_headers/trace.o 00:03:32.653 CXX test/cpp_headers/trace_parser.o 00:03:32.653 CXX test/cpp_headers/tree.o 00:03:32.653 CXX test/cpp_headers/ublk.o 00:03:32.653 CC examples/nvmf/nvmf/nvmf.o 00:03:32.653 CXX test/cpp_headers/util.o 00:03:32.653 CXX test/cpp_headers/uuid.o 00:03:32.910 CXX test/cpp_headers/version.o 00:03:32.910 CXX test/cpp_headers/vfio_user_pci.o 00:03:32.910 CXX test/cpp_headers/vfio_user_spec.o 00:03:32.910 CXX test/cpp_headers/vhost.o 00:03:32.910 CXX test/cpp_headers/vmd.o 00:03:32.910 CXX test/cpp_headers/xor.o 00:03:32.910 CXX test/cpp_headers/zipf.o 00:03:33.168 LINK nvmf 00:03:33.168 LINK esnap 00:03:33.734 00:03:33.735 real 1m30.195s 00:03:33.735 user 8m30.352s 00:03:33.735 sys 1m46.218s 00:03:33.735 17:02:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.735 ************************************ 00:03:33.735 END TEST make 00:03:33.735 ************************************ 00:03:33.735 17:02:25 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.735 17:02:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.735 17:02:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.735 17:02:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.735 17:02:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.735 17:02:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.735 17:02:26 -- pm/common@44 -- $ pid=5302 00:03:33.735 17:02:26 -- pm/common@50 -- $ kill -TERM 5302 00:03:33.735 17:02:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.735 17:02:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.735 17:02:26 -- pm/common@44 -- $ pid=5304 00:03:33.735 17:02:26 -- pm/common@50 -- $ kill -TERM 5304 00:03:33.735 17:02:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.735 17:02:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.735 17:02:26 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.735 17:02:26 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.735 17:02:26 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.735 17:02:26 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.735 17:02:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.735 17:02:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.735 17:02:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.735 17:02:26 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.735 17:02:26 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.735 
17:02:26 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.735 17:02:26 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.735 17:02:26 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.735 17:02:26 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.735 17:02:26 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.735 17:02:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.735 17:02:26 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.735 17:02:26 -- scripts/common.sh@345 -- # : 1 00:03:33.735 17:02:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.735 17:02:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.735 17:02:26 -- scripts/common.sh@365 -- # decimal 1 00:03:33.735 17:02:26 -- scripts/common.sh@353 -- # local d=1 00:03:33.735 17:02:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.735 17:02:26 -- scripts/common.sh@355 -- # echo 1 00:03:33.735 17:02:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.735 17:02:26 -- scripts/common.sh@366 -- # decimal 2 00:03:33.735 17:02:26 -- scripts/common.sh@353 -- # local d=2 00:03:33.735 17:02:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.735 17:02:26 -- scripts/common.sh@355 -- # echo 2 00:03:33.735 17:02:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.735 17:02:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.735 17:02:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.735 17:02:26 -- scripts/common.sh@368 -- # return 0 00:03:33.735 17:02:26 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.735 17:02:26 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.735 --rc genhtml_branch_coverage=1 00:03:33.735 --rc genhtml_function_coverage=1 00:03:33.735 --rc genhtml_legend=1 00:03:33.735 --rc geninfo_all_blocks=1 00:03:33.735 --rc geninfo_unexecuted_blocks=1 00:03:33.735 00:03:33.735 ' 00:03:33.735 17:02:26 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.735 --rc genhtml_branch_coverage=1 00:03:33.735 --rc genhtml_function_coverage=1 00:03:33.735 --rc genhtml_legend=1 00:03:33.735 --rc geninfo_all_blocks=1 00:03:33.735 --rc geninfo_unexecuted_blocks=1 00:03:33.735 00:03:33.735 ' 00:03:33.735 17:02:26 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.735 --rc genhtml_branch_coverage=1 00:03:33.735 --rc genhtml_function_coverage=1 00:03:33.735 --rc genhtml_legend=1 00:03:33.735 --rc geninfo_all_blocks=1 00:03:33.735 --rc geninfo_unexecuted_blocks=1 00:03:33.735 00:03:33.735 ' 00:03:33.735 17:02:26 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.735 --rc genhtml_branch_coverage=1 00:03:33.735 --rc genhtml_function_coverage=1 00:03:33.735 --rc genhtml_legend=1 00:03:33.735 --rc geninfo_all_blocks=1 00:03:33.735 --rc geninfo_unexecuted_blocks=1 00:03:33.735 00:03:33.735 ' 00:03:33.735 17:02:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.735 17:02:26 -- nvmf/common.sh@7 -- # uname -s 00:03:33.735 17:02:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.735 17:02:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.735 17:02:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
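The lt/cmp_versions trace above is autotest_common.sh deciding whether the installed lcov predates 2.x before enabling the branch-coverage rc options. A minimal reconstruction of that comparison, assuming only what the trace shows (version_lt is a stand-in name; the log's helpers are lt and cmp_versions in scripts/common.sh):

    version_lt() {
        local IFS=.-:                    # the separators the trace splits on
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                         # equal versions are not strictly less
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"

In this run lcov reports 1.15, so the comparison returns true at the first field (1 < 2) and the --rc lcov_branch_coverage / lcov_function_coverage options get exported.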
00:03:33.735 17:02:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.735 17:02:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.735 17:02:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.735 17:02:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.735 17:02:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.735 17:02:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.735 17:02:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.735 17:02:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:03:33.735 17:02:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:03:33.735 17:02:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.735 17:02:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.735 17:02:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:33.735 17:02:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.735 17:02:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.735 17:02:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.735 17:02:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.735 17:02:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.735 17:02:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.735 17:02:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.735 17:02:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.735 17:02:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.735 17:02:26 -- paths/export.sh@5 -- # export PATH 00:03:33.735 17:02:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.735 17:02:26 -- nvmf/common.sh@51 -- # : 0 00:03:33.735 17:02:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.735 17:02:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.735 17:02:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.735 17:02:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.735 17:02:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.735 17:02:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.735 17:02:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.735 17:02:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
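The one error emitted while sourcing nvmf/common.sh above is worth decoding: line 33 runs '[' '' -eq 1 ']', and test's -eq demands integers on both sides, so a variable that expands to the empty string makes '[' print "integer expression expected" and exit with status 2, which the surrounding conditional simply treats as false. A two-line illustration (VAR is a stand-in; the log does not name the variable that was empty):

    VAR=''
    [ "$VAR" -eq 1 ]         # [: : integer expression expected  (exit status 2)
    [ "${VAR:-0}" -eq 1 ]    # defaulting the expansion keeps the test well-formed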
00:03:33.735 17:02:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.735 17:02:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.735 17:02:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.735 17:02:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.735 17:02:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.735 17:02:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.735 17:02:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.735 17:02:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.735 17:02:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.994 17:02:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.994 17:02:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.994 17:02:26 -- spdk/autotest.sh@48 -- # udevadm_pid=56126 00:03:33.994 17:02:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.994 17:02:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.994 17:02:26 -- pm/common@17 -- # local monitor 00:03:33.994 17:02:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.994 17:02:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.994 17:02:26 -- pm/common@25 -- # sleep 1 00:03:33.994 17:02:26 -- pm/common@21 -- # date +%s 00:03:33.994 17:02:26 -- pm/common@21 -- # date +%s 00:03:33.994 17:02:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732035746 00:03:33.994 17:02:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732035746 00:03:33.994 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732035746_collect-vmstat.pm.log 00:03:33.994 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732035746_collect-cpu-load.pm.log 00:03:34.930 17:02:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.930 17:02:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.930 17:02:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.930 17:02:27 -- common/autotest_common.sh@10 -- # set +x 00:03:34.930 17:02:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.930 17:02:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:34.930 17:02:27 -- common/autotest_common.sh@10 -- # set +x 00:03:34.930 17:02:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.930 17:02:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.930 17:02:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.930 17:02:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.930 17:02:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.930 17:02:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.930 17:02:27 -- common/autotest_common.sh@1457 -- # uname 00:03:34.930 17:02:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:34.930 17:02:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.930 17:02:27 -- common/autotest_common.sh@1477 -- # uname 00:03:34.930 17:02:27 -- 
common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:34.930 17:02:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.930 17:02:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.204 lcov: LCOV version 1.15 00:03:35.204 17:02:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.304 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.181 17:03:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.181 17:03:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.181 17:03:00 -- common/autotest_common.sh@10 -- # set +x 00:04:08.181 17:03:00 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.181 17:03:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.452 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.452 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.452 17:03:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.452 17:03:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:08.452 17:03:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:08.452 17:03:00 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:08.452 17:03:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.452 17:03:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:08.452 17:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:08.452 17:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.452 17:03:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:08.452 17:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:08.452 17:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.452 17:03:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:08.452 17:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:08.452 17:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.452 17:03:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:08.452 17:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:08.452 17:03:00 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.452 17:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.452 17:03:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.452 17:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.452 17:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.452 17:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.452 17:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.452 17:03:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.452 No valid GPT data, bailing 00:04:08.452 17:03:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.452 17:03:00 -- scripts/common.sh@394 -- # pt= 00:04:08.452 17:03:00 -- scripts/common.sh@395 -- # return 1 00:04:08.452 17:03:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.452 1+0 records in 00:04:08.452 1+0 records out 00:04:08.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441779 s, 237 MB/s 00:04:08.452 17:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.452 17:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.452 17:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.452 17:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.452 17:03:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.738 No valid GPT data, bailing 00:04:08.738 17:03:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.738 17:03:00 -- scripts/common.sh@394 -- # pt= 00:04:08.738 17:03:00 -- scripts/common.sh@395 -- # return 1 00:04:08.738 17:03:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.738 1+0 records in 00:04:08.738 1+0 records out 00:04:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484749 s, 216 MB/s 00:04:08.738 17:03:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.738 17:03:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.738 17:03:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:08.738 17:03:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:08.738 17:03:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:08.738 No valid GPT data, bailing 00:04:08.738 17:03:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:08.738 17:03:01 -- scripts/common.sh@394 -- # pt= 00:04:08.738 17:03:01 -- scripts/common.sh@395 -- # return 1 00:04:08.738 17:03:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:08.738 1+0 records in 00:04:08.738 1+0 records out 00:04:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454401 s, 231 MB/s 00:04:08.738 17:03:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.738 17:03:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.738 17:03:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:08.738 17:03:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:08.738 17:03:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:08.738 No valid GPT data, bailing 00:04:08.738 17:03:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:08.738 17:03:01 -- scripts/common.sh@394 -- # pt= 00:04:08.738 17:03:01 -- scripts/common.sh@395 -- # return 1 
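The pre-cleanup pass traced through here wipes any unpartitioned, non-zoned NVMe namespace: get_zoned_devs reads /sys/block/*/queue/zoned so zoned devices are never touched, and each /dev/nvme*n* that yields "No valid GPT data, bailing" from spdk-gpt.py and an empty blkid PTTYPE gets its first MiB zeroed. A condensed sketch of that loop (the real block_in_use helper in scripts/common.sh handles more cases):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=${dev#/dev/}
        [[ $(cat "/sys/block/$name/queue/zoned") != none ]] && continue  # skip zoned
        if ! scripts/spdk-gpt.py "$dev" &&
           [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1                      # wipe stale data
        fi
    done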
00:04:08.738 17:03:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:08.738 1+0 records in 00:04:08.738 1+0 records out 00:04:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447946 s, 234 MB/s 00:04:08.738 17:03:01 -- spdk/autotest.sh@105 -- # sync 00:04:08.738 17:03:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.738 17:03:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.738 17:03:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.270 17:03:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:11.270 17:03:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:11.270 17:03:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:11.270 17:03:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.528 Hugepages 00:04:11.528 node hugesize free / total 00:04:11.528 node0 1048576kB 0 / 0 00:04:11.528 node0 2048kB 0 / 0 00:04:11.528 00:04:11.528 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.528 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.787 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.787 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:11.787 17:03:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.787 17:03:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.787 17:03:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.787 17:03:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.613 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.613 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.613 17:03:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:13.548 17:03:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:13.548 17:03:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:13.548 17:03:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.548 17:03:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:13.548 17:03:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.548 17:03:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.548 17:03:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.548 17:03:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.548 17:03:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.805 17:03:06 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:13.805 17:03:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:13.805 17:03:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.063 Waiting for block devices as requested 00:04:14.063 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.063 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.321 17:03:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.321 17:03:06 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.321 17:03:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.321 17:03:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.321 17:03:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1543 -- # continue 00:04:14.321 17:03:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.321 17:03:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:14.321 17:03:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.321 17:03:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.321 17:03:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.321 17:03:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.321 17:03:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
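The id-ctrl probing above gates the namespace revert: OACS is the Optional Admin Command Support bitmask from Identify Controller, and bit 3 (0x8) advertises Namespace Management; with oacs=0x12a that bit is set, and an unvmcap of 0 (no unallocated NVM capacity) means there is nothing to revert, hence the continue. The check, reduced to two lines:

    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)    # ' 0x12a' in this run
    (( oacs & 0x8 )) && echo "namespace management supported"    # bit 3 of 0x12a is set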
00:04:14.321 17:03:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.321 17:03:06 -- common/autotest_common.sh@1543 -- # continue 00:04:14.321 17:03:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:14.321 17:03:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.321 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.321 17:03:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:14.321 17:03:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.321 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.321 17:03:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.144 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.144 17:03:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:15.144 17:03:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.144 17:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:15.144 17:03:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:15.144 17:03:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:15.144 17:03:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.144 17:03:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:15.144 17:03:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:15.144 17:03:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:15.144 17:03:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:15.144 17:03:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:15.144 17:03:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.144 17:03:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.144 17:03:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.144 17:03:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.144 17:03:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.402 17:03:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:15.402 17:03:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:15.402 17:03:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.402 17:03:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:15.402 17:03:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.402 17:03:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.402 17:03:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.402 17:03:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:15.402 17:03:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.402 17:03:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.402 17:03:07 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:15.402 17:03:07 -- common/autotest_common.sh@1572 -- # return 0 00:04:15.402 17:03:07 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:15.402 17:03:07 -- common/autotest_common.sh@1580 -- # return 0 00:04:15.402 17:03:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.402 17:03:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.402 
17:03:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.403 17:03:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.403 17:03:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.403 17:03:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.403 17:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:15.403 17:03:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.403 17:03:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.403 17:03:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.403 17:03:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.403 17:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:15.403 ************************************ 00:04:15.403 START TEST env 00:04:15.403 ************************************ 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.403 * Looking for test storage... 00:04:15.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.403 17:03:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.403 17:03:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.403 17:03:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.403 17:03:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.403 17:03:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.403 17:03:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.403 17:03:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.403 17:03:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.403 17:03:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.403 17:03:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.403 17:03:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.403 17:03:07 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.403 17:03:07 env -- scripts/common.sh@345 -- # : 1 00:04:15.403 17:03:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.403 17:03:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.403 17:03:07 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.403 17:03:07 env -- scripts/common.sh@353 -- # local d=1 00:04:15.403 17:03:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.403 17:03:07 env -- scripts/common.sh@355 -- # echo 1 00:04:15.403 17:03:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.403 17:03:07 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.403 17:03:07 env -- scripts/common.sh@353 -- # local d=2 00:04:15.403 17:03:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.403 17:03:07 env -- scripts/common.sh@355 -- # echo 2 00:04:15.403 17:03:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.403 17:03:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.403 17:03:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.403 17:03:07 env -- scripts/common.sh@368 -- # return 0 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.403 --rc genhtml_branch_coverage=1 00:04:15.403 --rc genhtml_function_coverage=1 00:04:15.403 --rc genhtml_legend=1 00:04:15.403 --rc geninfo_all_blocks=1 00:04:15.403 --rc geninfo_unexecuted_blocks=1 00:04:15.403 00:04:15.403 ' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.403 --rc genhtml_branch_coverage=1 00:04:15.403 --rc genhtml_function_coverage=1 00:04:15.403 --rc genhtml_legend=1 00:04:15.403 --rc geninfo_all_blocks=1 00:04:15.403 --rc geninfo_unexecuted_blocks=1 00:04:15.403 00:04:15.403 ' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.403 --rc genhtml_branch_coverage=1 00:04:15.403 --rc genhtml_function_coverage=1 00:04:15.403 --rc genhtml_legend=1 00:04:15.403 --rc geninfo_all_blocks=1 00:04:15.403 --rc geninfo_unexecuted_blocks=1 00:04:15.403 00:04:15.403 ' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.403 --rc genhtml_branch_coverage=1 00:04:15.403 --rc genhtml_function_coverage=1 00:04:15.403 --rc genhtml_legend=1 00:04:15.403 --rc geninfo_all_blocks=1 00:04:15.403 --rc geninfo_unexecuted_blocks=1 00:04:15.403 00:04:15.403 ' 00:04:15.403 17:03:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.403 17:03:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.403 17:03:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 ************************************ 00:04:15.662 START TEST env_memory 00:04:15.662 ************************************ 00:04:15.662 17:03:07 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.662 00:04:15.662 00:04:15.662 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.662 http://cunit.sourceforge.net/ 00:04:15.662 00:04:15.662 00:04:15.662 Suite: memory 00:04:15.662 Test: alloc and free memory map ...[2024-11-19 17:03:07.984365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.662 passed 00:04:15.662 Test: mem map translation ...[2024-11-19 17:03:08.016101] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.662 [2024-11-19 17:03:08.016380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.662 [2024-11-19 17:03:08.016588] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.662 [2024-11-19 17:03:08.016881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.662 passed 00:04:15.662 Test: mem map registration ...[2024-11-19 17:03:08.081844] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.662 [2024-11-19 17:03:08.082116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.662 passed 00:04:15.662 Test: mem map adjacent registrations ...passed 00:04:15.662 00:04:15.662 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.662 suites 1 1 n/a 0 0 00:04:15.662 tests 4 4 4 0 0 00:04:15.662 asserts 152 152 152 0 n/a 00:04:15.662 00:04:15.662 Elapsed time = 0.215 seconds 00:04:15.662 00:04:15.662 real 0m0.236s 00:04:15.662 user 0m0.212s 00:04:15.662 sys 0m0.017s 00:04:15.662 ************************************ 00:04:15.662 END TEST env_memory 00:04:15.662 ************************************ 00:04:15.662 17:03:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.662 17:03:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.921 17:03:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.921 17:03:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.921 17:03:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.921 17:03:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.921 ************************************ 00:04:15.921 START TEST env_vtophys 00:04:15.921 ************************************ 00:04:15.921 17:03:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.921 EAL: lib.eal log level changed from notice to debug 00:04:15.921 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 1 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 2 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 3 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 4 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 5 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 6 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 7 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 8 as core 0 on socket 0 00:04:15.921 EAL: Detected lcore 9 as core 0 on socket 0 00:04:15.921 EAL: Maximum logical cores by configuration: 128 00:04:15.921 EAL: Detected CPU lcores: 10 00:04:15.921 EAL: Detected NUMA nodes: 1 00:04:15.921 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.921 EAL: Detected shared linkage of DPDK 00:04:15.921 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:15.921 EAL: Selected IOVA mode 'PA' 00:04:15.921 EAL: Probing VFIO support... 00:04:15.921 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.921 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:15.921 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.921 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.921 EAL: Setting up physically contiguous memory... 00:04:15.921 EAL: Setting maximum number of open files to 524288 00:04:15.921 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.921 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.921 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.921 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.921 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.921 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.921 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.921 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.921 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.921 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.921 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.921 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.922 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.922 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.922 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.922 EAL: Hugepages will be freed exactly as allocated. 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: TSC frequency is ~2200000 KHz 00:04:15.922 EAL: Main lcore 0 is ready (tid=7f014f01fa00;cpuset=[0]) 00:04:15.922 EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 0 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.922 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.922 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:15.922 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.922 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:15.922 00:04:15.922 00:04:15.922 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.922 http://cunit.sourceforge.net/ 00:04:15.922 00:04:15.922 00:04:15.922 Suite: components_suite 00:04:15.922 Test: vtophys_malloc_test ...passed 00:04:15.922 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.922 EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.922 EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.922 EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.922 EAL: Trying to obtain current memory policy. 00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.922 EAL: Trying to obtain current memory policy. 
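Context for the expand/shrink chatter in this suite: EAL reserved four 16 GiB virtual areas up front (the 0x400000000-byte memseg lists above) and maps 2 MB hugepages into them as the heap grows, so "expanded by N MB" is hugepage mapping rather than ordinary malloc. The backing pool can be inspected from the shell:

    grep -i hugepages /proc/meminfo                                # Total/Free counters
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages     # the 2 MB pool size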
00:04:15.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.922 EAL: Restoring previous memory policy: 4 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.922 EAL: request: mp_malloc_sync 00:04:15.922 EAL: No shared files mode enabled, IPC is disabled 00:04:15.922 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.181 EAL: request: mp_malloc_sync 00:04:16.181 EAL: No shared files mode enabled, IPC is disabled 00:04:16.181 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.181 EAL: Trying to obtain current memory policy. 00:04:16.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.181 EAL: Restoring previous memory policy: 4 00:04:16.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.181 EAL: request: mp_malloc_sync 00:04:16.181 EAL: No shared files mode enabled, IPC is disabled 00:04:16.181 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.181 EAL: request: mp_malloc_sync 00:04:16.181 EAL: No shared files mode enabled, IPC is disabled 00:04:16.181 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.181 EAL: Trying to obtain current memory policy. 00:04:16.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.181 EAL: Restoring previous memory policy: 4 00:04:16.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.181 EAL: request: mp_malloc_sync 00:04:16.181 EAL: No shared files mode enabled, IPC is disabled 00:04:16.181 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.181 EAL: request: mp_malloc_sync 00:04:16.181 EAL: No shared files mode enabled, IPC is disabled 00:04:16.181 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.181 EAL: Trying to obtain current memory policy. 00:04:16.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.440 EAL: Restoring previous memory policy: 4 00:04:16.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.440 EAL: request: mp_malloc_sync 00:04:16.440 EAL: No shared files mode enabled, IPC is disabled 00:04:16.440 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.699 EAL: request: mp_malloc_sync 00:04:16.699 EAL: No shared files mode enabled, IPC is disabled 00:04:16.699 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.699 EAL: Trying to obtain current memory policy. 
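The growth sizes across these rounds look like a doubling allocation (2^k MB) plus 2 MB of heap overhead; that is an inference from the reported numbers, not from the test source, but it reproduces every round up to the 1026 MB one that follows:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB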
00:04:16.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.958 EAL: Restoring previous memory policy: 4 00:04:16.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.958 EAL: request: mp_malloc_sync 00:04:16.958 EAL: No shared files mode enabled, IPC is disabled 00:04:16.958 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.216 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.216 passed 00:04:17.216 00:04:17.216 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.216 suites 1 1 n/a 0 0 00:04:17.216 tests 2 2 2 0 0 00:04:17.216 asserts 5358 5358 5358 0 n/a 00:04:17.216 00:04:17.216 Elapsed time = 1.257 seconds 00:04:17.216 EAL: request: mp_malloc_sync 00:04:17.216 EAL: No shared files mode enabled, IPC is disabled 00:04:17.216 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.216 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.216 EAL: request: mp_malloc_sync 00:04:17.216 EAL: No shared files mode enabled, IPC is disabled 00:04:17.216 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.216 EAL: No shared files mode enabled, IPC is disabled 00:04:17.216 EAL: No shared files mode enabled, IPC is disabled 00:04:17.216 EAL: No shared files mode enabled, IPC is disabled 00:04:17.216 00:04:17.216 real 0m1.472s 00:04:17.216 user 0m0.797s 00:04:17.216 sys 0m0.535s 00:04:17.216 17:03:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.216 ************************************ 00:04:17.216 END TEST env_vtophys 00:04:17.216 ************************************ 00:04:17.216 17:03:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:17.216 17:03:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.216 17:03:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.216 17:03:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.216 17:03:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.216 ************************************ 00:04:17.216 START TEST env_pci 00:04:17.216 ************************************ 00:04:17.216 17:03:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.474 00:04:17.474 00:04:17.474 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.474 http://cunit.sourceforge.net/ 00:04:17.474 00:04:17.474 00:04:17.474 Suite: pci 00:04:17.474 Test: pci_hook ...[2024-11-19 17:03:09.759688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58364 has claimed it 00:04:17.474 passed 00:04:17.474 00:04:17.474 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.474 suites 1 1 n/a 0 0 00:04:17.474 tests 1 1 1 0 0 00:04:17.474 asserts 25 25 25 0 n/a 00:04:17.474 00:04:17.474 Elapsed time = 0.002 seconds 00:04:17.474 EAL: Cannot find device (10000:00:01.0) 00:04:17.474 EAL: Failed to attach device on primary process 00:04:17.474 00:04:17.474 real 0m0.019s 00:04:17.474 user 0m0.009s 00:04:17.474 sys 0m0.011s 00:04:17.474 17:03:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.474 ************************************ 00:04:17.474 END TEST env_pci 00:04:17.474 ************************************ 00:04:17.474 17:03:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:17.474 17:03:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:17.474 17:03:09 env -- env/env.sh@15 -- # uname 00:04:17.474 17:03:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:17.474 17:03:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:17.474 17:03:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.474 17:03:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:17.474 17:03:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.474 17:03:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.474 ************************************ 00:04:17.474 START TEST env_dpdk_post_init 00:04:17.474 ************************************ 00:04:17.474 17:03:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.474 EAL: Detected CPU lcores: 10 00:04:17.474 EAL: Detected NUMA nodes: 1 00:04:17.474 EAL: Detected shared linkage of DPDK 00:04:17.474 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.474 EAL: Selected IOVA mode 'PA' 00:04:17.474 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.474 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:17.474 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:17.734 Starting DPDK initialization... 00:04:17.734 Starting SPDK post initialization... 00:04:17.734 SPDK NVMe probe 00:04:17.734 Attaching to 0000:00:10.0 00:04:17.734 Attaching to 0000:00:11.0 00:04:17.734 Attached to 0000:00:10.0 00:04:17.734 Attached to 0000:00:11.0 00:04:17.734 Cleaning up... 00:04:17.734 00:04:17.734 real 0m0.186s 00:04:17.734 user 0m0.056s 00:04:17.734 sys 0m0.030s 00:04:17.734 17:03:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.734 ************************************ 00:04:17.734 END TEST env_dpdk_post_init 00:04:17.734 ************************************ 00:04:17.734 17:03:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.734 17:03:10 env -- env/env.sh@26 -- # uname 00:04:17.734 17:03:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.734 17:03:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.734 17:03:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.734 17:03:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.734 17:03:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.734 ************************************ 00:04:17.734 START TEST env_mem_callbacks 00:04:17.734 ************************************ 00:04:17.734 17:03:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.734 EAL: Detected CPU lcores: 10 00:04:17.734 EAL: Detected NUMA nodes: 1 00:04:17.734 EAL: Detected shared linkage of DPDK 00:04:17.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.734 EAL: Selected IOVA mode 'PA' 00:04:17.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.734 00:04:17.734 00:04:17.734 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.734 http://cunit.sourceforge.net/ 00:04:17.734 00:04:17.734 00:04:17.734 Suite: memory 00:04:17.734 Test: test ... 
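The spdk_nvme probe and attach lines above pick up the controllers that setup.sh rebound from the kernel nvme driver earlier in this log; the same script drives every bind/unbind cycle seen here, roughly:

    sudo scripts/setup.sh            # bind NVMe/virtio devices to uio/vfio for SPDK
    sudo scripts/setup.sh status     # the hugepage/BDF table printed earlier
    sudo scripts/setup.sh reset     # hand the devices back to the kernel drivers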
00:04:17.734 register 0x200000200000 2097152 00:04:17.734 malloc 3145728 00:04:17.734 register 0x200000400000 4194304 00:04:17.734 buf 0x200000500000 len 3145728 PASSED 00:04:17.734 malloc 64 00:04:17.734 buf 0x2000004fff40 len 64 PASSED 00:04:17.734 malloc 4194304 00:04:17.734 register 0x200000800000 6291456 00:04:17.734 buf 0x200000a00000 len 4194304 PASSED 00:04:17.734 free 0x200000500000 3145728 00:04:17.734 free 0x2000004fff40 64 00:04:17.734 unregister 0x200000400000 4194304 PASSED 00:04:17.734 free 0x200000a00000 4194304 00:04:17.734 unregister 0x200000800000 6291456 PASSED 00:04:17.734 malloc 8388608 00:04:17.734 register 0x200000400000 10485760 00:04:17.734 buf 0x200000600000 len 8388608 PASSED 00:04:17.734 free 0x200000600000 8388608 00:04:17.734 unregister 0x200000400000 10485760 PASSED 00:04:17.734 passed 00:04:17.734 00:04:17.734 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.734 suites 1 1 n/a 0 0 00:04:17.734 tests 1 1 1 0 0 00:04:17.734 asserts 15 15 15 0 n/a 00:04:17.734 00:04:17.734 Elapsed time = 0.011 seconds 00:04:17.734 00:04:17.734 real 0m0.149s 00:04:17.734 user 0m0.019s 00:04:17.734 sys 0m0.028s 00:04:17.734 17:03:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.734 17:03:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.734 ************************************ 00:04:17.734 END TEST env_mem_callbacks 00:04:17.734 ************************************ 00:04:17.734 00:04:17.734 real 0m2.512s 00:04:17.734 user 0m1.287s 00:04:17.734 sys 0m0.867s 00:04:17.734 17:03:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.734 ************************************ 00:04:17.734 END TEST env 00:04:17.734 ************************************ 00:04:17.734 17:03:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.993 17:03:10 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.993 17:03:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.993 17:03:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.993 17:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.993 ************************************ 00:04:17.993 START TEST rpc 00:04:17.993 ************************************ 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.993 * Looking for test storage... 
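The register/malloc pairs in the mem_callbacks trace above show DPDK's mem-event hook keeping SPDK's address map in sync: every buffer is carved out of a region the callback just registered. For instance, the 3 MiB buf at 0x200000500000 ends exactly at the boundary of the 4 MiB region registered at 0x200000400000:

    printf '0x%x\n' $(( 0x200000500000 + 3145728 ))   # 0x200000800000
    printf '0x%x\n' $(( 0x200000400000 + 4194304 ))   # 0x200000800000, same boundary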
00:04:17.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.993 17:03:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.993 17:03:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.993 17:03:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.993 17:03:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.993 17:03:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.993 17:03:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.993 17:03:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.993 17:03:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.993 17:03:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.993 17:03:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.993 17:03:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.993 17:03:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.993 17:03:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.993 17:03:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.993 17:03:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.993 --rc genhtml_branch_coverage=1 00:04:17.993 --rc genhtml_function_coverage=1 00:04:17.993 --rc genhtml_legend=1 00:04:17.993 --rc geninfo_all_blocks=1 00:04:17.993 --rc geninfo_unexecuted_blocks=1 00:04:17.993 00:04:17.993 ' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.993 --rc genhtml_branch_coverage=1 00:04:17.993 --rc genhtml_function_coverage=1 00:04:17.993 --rc genhtml_legend=1 00:04:17.993 --rc geninfo_all_blocks=1 00:04:17.993 --rc geninfo_unexecuted_blocks=1 00:04:17.993 00:04:17.993 ' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.993 --rc genhtml_branch_coverage=1 00:04:17.993 --rc genhtml_function_coverage=1 00:04:17.993 --rc 
genhtml_legend=1 00:04:17.993 --rc geninfo_all_blocks=1 00:04:17.993 --rc geninfo_unexecuted_blocks=1 00:04:17.993 00:04:17.993 ' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.993 --rc genhtml_branch_coverage=1 00:04:17.993 --rc genhtml_function_coverage=1 00:04:17.993 --rc genhtml_legend=1 00:04:17.993 --rc geninfo_all_blocks=1 00:04:17.993 --rc geninfo_unexecuted_blocks=1 00:04:17.993 00:04:17.993 ' 00:04:17.993 17:03:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58482 00:04:17.993 17:03:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:17.993 17:03:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.993 17:03:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58482 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 58482 ']' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.993 17:03:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.252 [2024-11-19 17:03:10.576769] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:18.252 [2024-11-19 17:03:10.576912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58482 ] 00:04:18.252 [2024-11-19 17:03:10.727694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.523 [2024-11-19 17:03:10.811142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.523 [2024-11-19 17:03:10.811231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58482' to capture a snapshot of events at runtime. 00:04:18.523 [2024-11-19 17:03:10.811253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.523 [2024-11-19 17:03:10.811268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.523 [2024-11-19 17:03:10.811280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58482 for offline analysis/debug. 
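(Editor's note: the startup sequence just logged, launching spdk_tgt and then waitforlisten polling /var/tmp/spdk.sock, can be reproduced by hand. A hedged sketch, assuming the standard SPDK tree layout of build/bin/spdk_tgt and scripts/rpc.py, and that rpc.py's -t timeout flag is available:)

    # Launch the target with the bdev tracepoint group enabled, then poll
    # the default UNIX socket until the RPC server answers (roughly what
    # waitforlisten does in autotest_common.sh).
    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    # Per the notices above, the trace shm file can then be inspected with:
    #   spdk_trace -s spdk_tgt -p $spdk_pid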
00:04:18.523 [2024-11-19 17:03:10.811820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.129 17:03:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.129 17:03:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.129 17:03:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.129 17:03:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.129 17:03:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.129 17:03:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.129 17:03:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.129 17:03:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.129 17:03:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.129 ************************************ 00:04:19.129 START TEST rpc_integrity 00:04:19.129 ************************************ 00:04:19.129 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:19.129 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.129 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.129 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.129 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.129 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.129 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.388 { 00:04:19.388 "aliases": [ 00:04:19.388 "608f8537-d6b7-43c3-9423-943312af6b53" 00:04:19.388 ], 00:04:19.388 "assigned_rate_limits": { 00:04:19.388 "r_mbytes_per_sec": 0, 00:04:19.388 "rw_ios_per_sec": 0, 00:04:19.388 "rw_mbytes_per_sec": 0, 00:04:19.388 "w_mbytes_per_sec": 0 00:04:19.388 }, 00:04:19.388 "block_size": 512, 00:04:19.388 "claimed": false, 00:04:19.388 "driver_specific": {}, 00:04:19.388 "memory_domains": [ 00:04:19.388 { 00:04:19.388 "dma_device_id": "system", 00:04:19.388 "dma_device_type": 1 00:04:19.388 }, 00:04:19.388 { 00:04:19.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.388 "dma_device_type": 2 00:04:19.388 } 00:04:19.388 ], 00:04:19.388 "name": "Malloc0", 
00:04:19.388 "num_blocks": 16384, 00:04:19.388 "product_name": "Malloc disk", 00:04:19.388 "supported_io_types": { 00:04:19.388 "abort": true, 00:04:19.388 "compare": false, 00:04:19.388 "compare_and_write": false, 00:04:19.388 "copy": true, 00:04:19.388 "flush": true, 00:04:19.388 "get_zone_info": false, 00:04:19.388 "nvme_admin": false, 00:04:19.388 "nvme_io": false, 00:04:19.388 "nvme_io_md": false, 00:04:19.388 "nvme_iov_md": false, 00:04:19.388 "read": true, 00:04:19.388 "reset": true, 00:04:19.388 "seek_data": false, 00:04:19.388 "seek_hole": false, 00:04:19.388 "unmap": true, 00:04:19.388 "write": true, 00:04:19.388 "write_zeroes": true, 00:04:19.388 "zcopy": true, 00:04:19.388 "zone_append": false, 00:04:19.388 "zone_management": false 00:04:19.388 }, 00:04:19.388 "uuid": "608f8537-d6b7-43c3-9423-943312af6b53", 00:04:19.388 "zoned": false 00:04:19.388 } 00:04:19.388 ]' 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.388 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.388 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.389 [2024-11-19 17:03:11.789306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.389 [2024-11-19 17:03:11.789494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.389 [2024-11-19 17:03:11.789527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x163eba0 00:04:19.389 [2024-11-19 17:03:11.789537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.389 [2024-11-19 17:03:11.791267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.389 [2024-11-19 17:03:11.791304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.389 Passthru0 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.389 { 00:04:19.389 "aliases": [ 00:04:19.389 "608f8537-d6b7-43c3-9423-943312af6b53" 00:04:19.389 ], 00:04:19.389 "assigned_rate_limits": { 00:04:19.389 "r_mbytes_per_sec": 0, 00:04:19.389 "rw_ios_per_sec": 0, 00:04:19.389 "rw_mbytes_per_sec": 0, 00:04:19.389 "w_mbytes_per_sec": 0 00:04:19.389 }, 00:04:19.389 "block_size": 512, 00:04:19.389 "claim_type": "exclusive_write", 00:04:19.389 "claimed": true, 00:04:19.389 "driver_specific": {}, 00:04:19.389 "memory_domains": [ 00:04:19.389 { 00:04:19.389 "dma_device_id": "system", 00:04:19.389 "dma_device_type": 1 00:04:19.389 }, 00:04:19.389 { 00:04:19.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.389 "dma_device_type": 2 00:04:19.389 } 00:04:19.389 ], 00:04:19.389 "name": "Malloc0", 00:04:19.389 "num_blocks": 16384, 00:04:19.389 "product_name": "Malloc disk", 00:04:19.389 "supported_io_types": { 00:04:19.389 "abort": true, 00:04:19.389 "compare": false, 00:04:19.389 
"compare_and_write": false, 00:04:19.389 "copy": true, 00:04:19.389 "flush": true, 00:04:19.389 "get_zone_info": false, 00:04:19.389 "nvme_admin": false, 00:04:19.389 "nvme_io": false, 00:04:19.389 "nvme_io_md": false, 00:04:19.389 "nvme_iov_md": false, 00:04:19.389 "read": true, 00:04:19.389 "reset": true, 00:04:19.389 "seek_data": false, 00:04:19.389 "seek_hole": false, 00:04:19.389 "unmap": true, 00:04:19.389 "write": true, 00:04:19.389 "write_zeroes": true, 00:04:19.389 "zcopy": true, 00:04:19.389 "zone_append": false, 00:04:19.389 "zone_management": false 00:04:19.389 }, 00:04:19.389 "uuid": "608f8537-d6b7-43c3-9423-943312af6b53", 00:04:19.389 "zoned": false 00:04:19.389 }, 00:04:19.389 { 00:04:19.389 "aliases": [ 00:04:19.389 "03219219-bc89-5aeb-9c74-09b1cd618409" 00:04:19.389 ], 00:04:19.389 "assigned_rate_limits": { 00:04:19.389 "r_mbytes_per_sec": 0, 00:04:19.389 "rw_ios_per_sec": 0, 00:04:19.389 "rw_mbytes_per_sec": 0, 00:04:19.389 "w_mbytes_per_sec": 0 00:04:19.389 }, 00:04:19.389 "block_size": 512, 00:04:19.389 "claimed": false, 00:04:19.389 "driver_specific": { 00:04:19.389 "passthru": { 00:04:19.389 "base_bdev_name": "Malloc0", 00:04:19.389 "name": "Passthru0" 00:04:19.389 } 00:04:19.389 }, 00:04:19.389 "memory_domains": [ 00:04:19.389 { 00:04:19.389 "dma_device_id": "system", 00:04:19.389 "dma_device_type": 1 00:04:19.389 }, 00:04:19.389 { 00:04:19.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.389 "dma_device_type": 2 00:04:19.389 } 00:04:19.389 ], 00:04:19.389 "name": "Passthru0", 00:04:19.389 "num_blocks": 16384, 00:04:19.389 "product_name": "passthru", 00:04:19.389 "supported_io_types": { 00:04:19.389 "abort": true, 00:04:19.389 "compare": false, 00:04:19.389 "compare_and_write": false, 00:04:19.389 "copy": true, 00:04:19.389 "flush": true, 00:04:19.389 "get_zone_info": false, 00:04:19.389 "nvme_admin": false, 00:04:19.389 "nvme_io": false, 00:04:19.389 "nvme_io_md": false, 00:04:19.389 "nvme_iov_md": false, 00:04:19.389 "read": true, 00:04:19.389 "reset": true, 00:04:19.389 "seek_data": false, 00:04:19.389 "seek_hole": false, 00:04:19.389 "unmap": true, 00:04:19.389 "write": true, 00:04:19.389 "write_zeroes": true, 00:04:19.389 "zcopy": true, 00:04:19.389 "zone_append": false, 00:04:19.389 "zone_management": false 00:04:19.389 }, 00:04:19.389 "uuid": "03219219-bc89-5aeb-9c74-09b1cd618409", 00:04:19.389 "zoned": false 00:04:19.389 } 00:04:19.389 ]' 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:19.389 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.389 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.648 ************************************ 00:04:19.648 END TEST rpc_integrity 00:04:19.648 ************************************ 00:04:19.648 17:03:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.648 00:04:19.648 real 0m0.339s 00:04:19.648 user 0m0.230s 00:04:19.648 sys 0m0.033s 00:04:19.648 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.648 17:03:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.648 17:03:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.648 17:03:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.648 17:03:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.648 17:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.648 ************************************ 00:04:19.648 START TEST rpc_plugins 00:04:19.648 ************************************ 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:19.648 { 00:04:19.648 "aliases": [ 00:04:19.648 "defab596-87c5-46e6-974c-9683b81dc49a" 00:04:19.648 ], 00:04:19.648 "assigned_rate_limits": { 00:04:19.648 "r_mbytes_per_sec": 0, 00:04:19.648 "rw_ios_per_sec": 0, 00:04:19.648 "rw_mbytes_per_sec": 0, 00:04:19.648 "w_mbytes_per_sec": 0 00:04:19.648 }, 00:04:19.648 "block_size": 4096, 00:04:19.648 "claimed": false, 00:04:19.648 "driver_specific": {}, 00:04:19.648 "memory_domains": [ 00:04:19.648 { 00:04:19.648 "dma_device_id": "system", 00:04:19.648 "dma_device_type": 1 00:04:19.648 }, 00:04:19.648 { 00:04:19.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.648 "dma_device_type": 2 00:04:19.648 } 00:04:19.648 ], 00:04:19.648 "name": "Malloc1", 00:04:19.648 "num_blocks": 256, 00:04:19.648 "product_name": "Malloc disk", 00:04:19.648 "supported_io_types": { 00:04:19.648 "abort": true, 00:04:19.648 "compare": false, 00:04:19.648 "compare_and_write": false, 00:04:19.648 "copy": true, 00:04:19.648 "flush": true, 00:04:19.648 "get_zone_info": false, 00:04:19.648 "nvme_admin": false, 00:04:19.648 "nvme_io": false, 00:04:19.648 "nvme_io_md": false, 00:04:19.648 "nvme_iov_md": false, 00:04:19.648 "read": true, 00:04:19.648 "reset": true, 00:04:19.648 "seek_data": false, 00:04:19.648 "seek_hole": false, 00:04:19.648 "unmap": true, 00:04:19.648 "write": true, 00:04:19.648 "write_zeroes": true, 00:04:19.648 "zcopy": true, 00:04:19.648 "zone_append": false, 
00:04:19.648 "zone_management": false 00:04:19.648 }, 00:04:19.648 "uuid": "defab596-87c5-46e6-974c-9683b81dc49a", 00:04:19.648 "zoned": false 00:04:19.648 } 00:04:19.648 ]' 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.648 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.648 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.649 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.649 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.649 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:19.649 ************************************ 00:04:19.649 END TEST rpc_plugins 00:04:19.649 ************************************ 00:04:19.649 17:03:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.649 00:04:19.649 real 0m0.164s 00:04:19.649 user 0m0.108s 00:04:19.649 sys 0m0.018s 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.649 17:03:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.907 17:03:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:19.907 17:03:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.907 17:03:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.907 17:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.907 ************************************ 00:04:19.907 START TEST rpc_trace_cmd_test 00:04:19.907 ************************************ 00:04:19.907 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:19.907 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:19.907 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:19.907 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.907 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:19.908 "bdev": { 00:04:19.908 "mask": "0x8", 00:04:19.908 "tpoint_mask": "0xffffffffffffffff" 00:04:19.908 }, 00:04:19.908 "bdev_nvme": { 00:04:19.908 "mask": "0x4000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "bdev_raid": { 00:04:19.908 "mask": "0x20000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "blob": { 00:04:19.908 "mask": "0x10000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "blobfs": { 00:04:19.908 "mask": "0x80", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "dsa": { 00:04:19.908 "mask": "0x200", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "ftl": { 00:04:19.908 "mask": "0x40", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "iaa": { 00:04:19.908 "mask": "0x1000", 
00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "iscsi_conn": { 00:04:19.908 "mask": "0x2", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "nvme_pcie": { 00:04:19.908 "mask": "0x800", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "nvme_tcp": { 00:04:19.908 "mask": "0x2000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "nvmf_rdma": { 00:04:19.908 "mask": "0x10", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "nvmf_tcp": { 00:04:19.908 "mask": "0x20", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "scheduler": { 00:04:19.908 "mask": "0x40000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "scsi": { 00:04:19.908 "mask": "0x4", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "sock": { 00:04:19.908 "mask": "0x8000", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "thread": { 00:04:19.908 "mask": "0x400", 00:04:19.908 "tpoint_mask": "0x0" 00:04:19.908 }, 00:04:19.908 "tpoint_group_mask": "0x8", 00:04:19.908 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58482" 00:04:19.908 }' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:19.908 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.168 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.168 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.168 ************************************ 00:04:20.168 END TEST rpc_trace_cmd_test 00:04:20.168 ************************************ 00:04:20.168 17:03:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.168 00:04:20.168 real 0m0.275s 00:04:20.168 user 0m0.232s 00:04:20.168 sys 0m0.034s 00:04:20.168 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.168 17:03:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.168 17:03:12 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:20.168 17:03:12 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:20.168 17:03:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.168 17:03:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.168 17:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.168 ************************************ 00:04:20.168 START TEST go_rpc 00:04:20.168 ************************************ 00:04:20.168 17:03:12 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.168 17:03:12 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.168 17:03:12 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.168 17:03:12 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["edb69920-ad02-401b-9280-323411d74148"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"edb69920-ad02-401b-9280-323411d74148","zoned":false}]' 00:04:20.168 17:03:12 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:20.428 17:03:12 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.428 17:03:12 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 17:03:12 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:20.428 ************************************ 00:04:20.428 END TEST go_rpc 00:04:20.428 ************************************ 00:04:20.428 17:03:12 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:20.428 00:04:20.428 real 0m0.232s 00:04:20.428 user 0m0.159s 00:04:20.428 sys 0m0.037s 00:04:20.428 17:03:12 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.428 17:03:12 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 17:03:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.428 17:03:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.428 17:03:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.428 17:03:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.428 17:03:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 ************************************ 00:04:20.428 START TEST rpc_daemon_integrity 00:04:20.428 ************************************ 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.428 
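(Editor's note: the go_rpc pass above compares the Go JSON-RPC client's view of the bdev list against rpc_cmd, and the rpc_daemon_integrity pass that has just begun re-runs the same malloc/passthru round-trip seen in rpc_integrity earlier. A hedged manual equivalent of both flows, assuming the built hello_gorpc example and scripts/rpc.py in place of the suite's rpc_cmd wrapper:)

    ./build/examples/hello_gorpc | jq length      # 0: no bdevs registered yet
    ./scripts/rpc.py bdev_malloc_create 8 512     # 8 MiB, 512 B blocks; auto-named (Malloc2/Malloc3 above)
    ./build/examples/hello_gorpc | jq length      # 1
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length   # 2: base bdev plus passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc3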
17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.428 { 00:04:20.428 "aliases": [ 00:04:20.428 "8f2cf20b-b62e-49ef-93a2-0b5dd588604e" 00:04:20.428 ], 00:04:20.428 "assigned_rate_limits": { 00:04:20.428 "r_mbytes_per_sec": 0, 00:04:20.428 "rw_ios_per_sec": 0, 00:04:20.428 "rw_mbytes_per_sec": 0, 00:04:20.428 "w_mbytes_per_sec": 0 00:04:20.428 }, 00:04:20.428 "block_size": 512, 00:04:20.428 "claimed": false, 00:04:20.428 "driver_specific": {}, 00:04:20.428 "memory_domains": [ 00:04:20.428 { 00:04:20.428 "dma_device_id": "system", 00:04:20.428 "dma_device_type": 1 00:04:20.428 }, 00:04:20.428 { 00:04:20.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.428 "dma_device_type": 2 00:04:20.428 } 00:04:20.428 ], 00:04:20.428 "name": "Malloc3", 00:04:20.428 "num_blocks": 16384, 00:04:20.428 "product_name": "Malloc disk", 00:04:20.428 "supported_io_types": { 00:04:20.428 "abort": true, 00:04:20.428 "compare": false, 00:04:20.428 "compare_and_write": false, 00:04:20.428 "copy": true, 00:04:20.428 "flush": true, 00:04:20.428 "get_zone_info": false, 00:04:20.428 "nvme_admin": false, 00:04:20.428 "nvme_io": false, 00:04:20.428 "nvme_io_md": false, 00:04:20.428 "nvme_iov_md": false, 00:04:20.428 "read": true, 00:04:20.428 "reset": true, 00:04:20.428 "seek_data": false, 00:04:20.428 "seek_hole": false, 00:04:20.428 "unmap": true, 00:04:20.428 "write": true, 00:04:20.428 "write_zeroes": true, 00:04:20.428 "zcopy": true, 00:04:20.428 "zone_append": false, 00:04:20.428 "zone_management": false 00:04:20.428 }, 00:04:20.428 "uuid": "8f2cf20b-b62e-49ef-93a2-0b5dd588604e", 00:04:20.428 "zoned": false 00:04:20.428 } 00:04:20.428 ]' 00:04:20.428 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.687 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.687 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:20.687 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.687 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.687 [2024-11-19 17:03:12.990749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:20.687 [2024-11-19 17:03:12.990813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.687 [2024-11-19 17:03:12.990837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14fd420 00:04:20.687 [2024-11-19 17:03:12.990847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:20.687 [2024-11-19 17:03:12.992536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.688 [2024-11-19 17:03:12.992700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.688 Passthru0 00:04:20.688 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.688 17:03:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.688 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.688 17:03:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.688 { 00:04:20.688 "aliases": [ 00:04:20.688 "8f2cf20b-b62e-49ef-93a2-0b5dd588604e" 00:04:20.688 ], 00:04:20.688 "assigned_rate_limits": { 00:04:20.688 "r_mbytes_per_sec": 0, 00:04:20.688 "rw_ios_per_sec": 0, 00:04:20.688 "rw_mbytes_per_sec": 0, 00:04:20.688 "w_mbytes_per_sec": 0 00:04:20.688 }, 00:04:20.688 "block_size": 512, 00:04:20.688 "claim_type": "exclusive_write", 00:04:20.688 "claimed": true, 00:04:20.688 "driver_specific": {}, 00:04:20.688 "memory_domains": [ 00:04:20.688 { 00:04:20.688 "dma_device_id": "system", 00:04:20.688 "dma_device_type": 1 00:04:20.688 }, 00:04:20.688 { 00:04:20.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.688 "dma_device_type": 2 00:04:20.688 } 00:04:20.688 ], 00:04:20.688 "name": "Malloc3", 00:04:20.688 "num_blocks": 16384, 00:04:20.688 "product_name": "Malloc disk", 00:04:20.688 "supported_io_types": { 00:04:20.688 "abort": true, 00:04:20.688 "compare": false, 00:04:20.688 "compare_and_write": false, 00:04:20.688 "copy": true, 00:04:20.688 "flush": true, 00:04:20.688 "get_zone_info": false, 00:04:20.688 "nvme_admin": false, 00:04:20.688 "nvme_io": false, 00:04:20.688 "nvme_io_md": false, 00:04:20.688 "nvme_iov_md": false, 00:04:20.688 "read": true, 00:04:20.688 "reset": true, 00:04:20.688 "seek_data": false, 00:04:20.688 "seek_hole": false, 00:04:20.688 "unmap": true, 00:04:20.688 "write": true, 00:04:20.688 "write_zeroes": true, 00:04:20.688 "zcopy": true, 00:04:20.688 "zone_append": false, 00:04:20.688 "zone_management": false 00:04:20.688 }, 00:04:20.688 "uuid": "8f2cf20b-b62e-49ef-93a2-0b5dd588604e", 00:04:20.688 "zoned": false 00:04:20.688 }, 00:04:20.688 { 00:04:20.688 "aliases": [ 00:04:20.688 "c435f8b8-efca-5309-89e5-042447f52faf" 00:04:20.688 ], 00:04:20.688 "assigned_rate_limits": { 00:04:20.688 "r_mbytes_per_sec": 0, 00:04:20.688 "rw_ios_per_sec": 0, 00:04:20.688 "rw_mbytes_per_sec": 0, 00:04:20.688 "w_mbytes_per_sec": 0 00:04:20.688 }, 00:04:20.688 "block_size": 512, 00:04:20.688 "claimed": false, 00:04:20.688 "driver_specific": { 00:04:20.688 "passthru": { 00:04:20.688 "base_bdev_name": "Malloc3", 00:04:20.688 "name": "Passthru0" 00:04:20.688 } 00:04:20.688 }, 00:04:20.688 "memory_domains": [ 00:04:20.688 { 00:04:20.688 "dma_device_id": "system", 00:04:20.688 "dma_device_type": 1 00:04:20.688 }, 00:04:20.688 { 00:04:20.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.688 "dma_device_type": 2 00:04:20.688 } 00:04:20.688 ], 00:04:20.688 "name": "Passthru0", 00:04:20.688 "num_blocks": 16384, 00:04:20.688 "product_name": "passthru", 00:04:20.688 "supported_io_types": { 00:04:20.688 "abort": true, 00:04:20.688 "compare": false, 00:04:20.688 "compare_and_write": false, 00:04:20.688 "copy": true, 
00:04:20.688 "flush": true, 00:04:20.688 "get_zone_info": false, 00:04:20.688 "nvme_admin": false, 00:04:20.688 "nvme_io": false, 00:04:20.688 "nvme_io_md": false, 00:04:20.688 "nvme_iov_md": false, 00:04:20.688 "read": true, 00:04:20.688 "reset": true, 00:04:20.688 "seek_data": false, 00:04:20.688 "seek_hole": false, 00:04:20.688 "unmap": true, 00:04:20.688 "write": true, 00:04:20.688 "write_zeroes": true, 00:04:20.688 "zcopy": true, 00:04:20.688 "zone_append": false, 00:04:20.688 "zone_management": false 00:04:20.688 }, 00:04:20.688 "uuid": "c435f8b8-efca-5309-89e5-042447f52faf", 00:04:20.688 "zoned": false 00:04:20.688 } 00:04:20.688 ]' 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.688 ************************************ 00:04:20.688 END TEST rpc_daemon_integrity 00:04:20.688 ************************************ 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.688 00:04:20.688 real 0m0.319s 00:04:20.688 user 0m0.206s 00:04:20.688 sys 0m0.045s 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.688 17:03:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.688 17:03:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.688 17:03:13 rpc -- rpc/rpc.sh@84 -- # killprocess 58482 00:04:20.688 17:03:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 58482 ']' 00:04:20.688 17:03:13 rpc -- common/autotest_common.sh@958 -- # kill -0 58482 00:04:20.688 17:03:13 rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.688 17:03:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.688 17:03:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58482 00:04:20.947 killing process with pid 58482 00:04:20.947 17:03:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.947 17:03:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.947 17:03:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58482' 00:04:20.947 17:03:13 rpc -- 
common/autotest_common.sh@973 -- # kill 58482 00:04:20.947 17:03:13 rpc -- common/autotest_common.sh@978 -- # wait 58482 00:04:21.206 00:04:21.206 real 0m3.310s 00:04:21.206 user 0m4.392s 00:04:21.206 sys 0m0.786s 00:04:21.206 17:03:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.206 17:03:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.206 ************************************ 00:04:21.206 END TEST rpc 00:04:21.206 ************************************ 00:04:21.206 17:03:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.206 17:03:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.206 17:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.206 17:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.206 ************************************ 00:04:21.206 START TEST skip_rpc 00:04:21.206 ************************************ 00:04:21.206 17:03:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.206 * Looking for test storage... 00:04:21.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.465 17:03:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.465 --rc genhtml_branch_coverage=1 00:04:21.465 --rc genhtml_function_coverage=1 00:04:21.465 --rc genhtml_legend=1 00:04:21.465 --rc geninfo_all_blocks=1 00:04:21.465 --rc geninfo_unexecuted_blocks=1 00:04:21.465 00:04:21.465 ' 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.465 --rc genhtml_branch_coverage=1 00:04:21.465 --rc genhtml_function_coverage=1 00:04:21.465 --rc genhtml_legend=1 00:04:21.465 --rc geninfo_all_blocks=1 00:04:21.465 --rc geninfo_unexecuted_blocks=1 00:04:21.465 00:04:21.465 ' 00:04:21.465 17:03:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.465 --rc genhtml_branch_coverage=1 00:04:21.465 --rc genhtml_function_coverage=1 00:04:21.466 --rc genhtml_legend=1 00:04:21.466 --rc geninfo_all_blocks=1 00:04:21.466 --rc geninfo_unexecuted_blocks=1 00:04:21.466 00:04:21.466 ' 00:04:21.466 17:03:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.466 --rc genhtml_branch_coverage=1 00:04:21.466 --rc genhtml_function_coverage=1 00:04:21.466 --rc genhtml_legend=1 00:04:21.466 --rc geninfo_all_blocks=1 00:04:21.466 --rc geninfo_unexecuted_blocks=1 00:04:21.466 00:04:21.466 ' 00:04:21.466 17:03:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.466 17:03:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.466 17:03:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:21.466 17:03:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.466 17:03:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.466 17:03:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.466 ************************************ 00:04:21.466 START TEST skip_rpc 00:04:21.466 ************************************ 00:04:21.466 17:03:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:21.466 17:03:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58756 00:04:21.466 17:03:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:21.466 17:03:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.466 17:03:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:21.466 [2024-11-19 17:03:13.896768] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:21.466 [2024-11-19 17:03:13.897073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:04:21.724 [2024-11-19 17:03:14.042553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.724 [2024-11-19 17:03:14.107879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.988 17:03:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:26.988 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:26.988 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:26.988 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:26.988 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.989 2024/11/19 17:03:18 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58756 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58756 ']' 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58756 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58756 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58756' 00:04:26.989 killing process with pid 58756 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58756 00:04:26.989 17:03:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58756 00:04:26.989 00:04:26.989 real 0m5.601s 00:04:26.989 user 0m5.246s 00:04:26.989 sys 0m0.270s 00:04:26.989 17:03:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.989 ************************************ 00:04:26.989 END TEST skip_rpc 00:04:26.989 ************************************ 00:04:26.989 17:03:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.989 17:03:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:26.989 17:03:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.989 17:03:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.989 17:03:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.989 ************************************ 00:04:26.989 START TEST skip_rpc_with_json 00:04:26.989 ************************************ 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58843 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58843 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58843 ']' 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.989 17:03:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.247 [2024-11-19 17:03:19.552668] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
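(Editor's note: the skip_rpc pass above verified that rpc_cmd fails fast when the target runs with --no-rpc-server; the skip_rpc_with_json pass now starting goes the other way and round-trips the runtime configuration through a JSON file, as the log below shows. A hedged manual equivalent, assuming scripts/rpc.py, a target already listening on /var/tmp/spdk.sock with its PID in $spdk_pid as in the earlier launch sketch, and /tmp/config.json as an illustrative scratch path where the suite uses test/rpc/config.json:)

    # nvmf_get_transports --trtype tcp errors out until a transport exists:
    ./scripts/rpc.py nvmf_create_transport -t tcp
    # Dump every subsystem's current settings to a config file:
    ./scripts/rpc.py save_config > /tmp/config.json
    kill "$spdk_pid"
    wait "$spdk_pid"
    # Restart with no RPC server at all, restoring state purely from JSON:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json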
00:04:27.247 [2024-11-19 17:03:19.552777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:04:27.247 [2024-11-19 17:03:19.697922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.247 [2024-11-19 17:03:19.773757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.839 [2024-11-19 17:03:20.156974] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:27.839 2024/11/19 17:03:20 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:27.839 request: 00:04:27.839 { 00:04:27.839 "method": "nvmf_get_transports", 00:04:27.839 "params": { 00:04:27.839 "trtype": "tcp" 00:04:27.839 } 00:04:27.839 } 00:04:27.839 Got JSON-RPC error response 00:04:27.839 GoRPCClient: error on JSON-RPC call 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.839 [2024-11-19 17:03:20.169089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.839 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.101 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.101 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.101 { 00:04:28.101 "subsystems": [ 00:04:28.101 { 00:04:28.101 "subsystem": "fsdev", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "fsdev_set_opts", 00:04:28.101 "params": { 00:04:28.101 "fsdev_io_cache_size": 256, 00:04:28.101 "fsdev_io_pool_size": 65535 00:04:28.101 } 00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "keyring", 00:04:28.101 "config": [] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "iobuf", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "iobuf_set_options", 00:04:28.101 "params": { 00:04:28.101 "enable_numa": false, 00:04:28.101 "large_bufsize": 135168, 00:04:28.101 "large_pool_count": 1024, 00:04:28.101 "small_bufsize": 8192, 00:04:28.101 "small_pool_count": 8192 00:04:28.101 } 
00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "sock", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "sock_set_default_impl", 00:04:28.101 "params": { 00:04:28.101 "impl_name": "posix" 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "sock_impl_set_options", 00:04:28.101 "params": { 00:04:28.101 "enable_ktls": false, 00:04:28.101 "enable_placement_id": 0, 00:04:28.101 "enable_quickack": false, 00:04:28.101 "enable_recv_pipe": true, 00:04:28.101 "enable_zerocopy_send_client": false, 00:04:28.101 "enable_zerocopy_send_server": true, 00:04:28.101 "impl_name": "ssl", 00:04:28.101 "recv_buf_size": 4096, 00:04:28.101 "send_buf_size": 4096, 00:04:28.101 "tls_version": 0, 00:04:28.101 "zerocopy_threshold": 0 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "sock_impl_set_options", 00:04:28.101 "params": { 00:04:28.101 "enable_ktls": false, 00:04:28.101 "enable_placement_id": 0, 00:04:28.101 "enable_quickack": false, 00:04:28.101 "enable_recv_pipe": true, 00:04:28.101 "enable_zerocopy_send_client": false, 00:04:28.101 "enable_zerocopy_send_server": true, 00:04:28.101 "impl_name": "posix", 00:04:28.101 "recv_buf_size": 2097152, 00:04:28.101 "send_buf_size": 2097152, 00:04:28.101 "tls_version": 0, 00:04:28.101 "zerocopy_threshold": 0 00:04:28.101 } 00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "vmd", 00:04:28.101 "config": [] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "accel", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "accel_set_options", 00:04:28.101 "params": { 00:04:28.101 "buf_count": 2048, 00:04:28.101 "large_cache_size": 16, 00:04:28.101 "sequence_count": 2048, 00:04:28.101 "small_cache_size": 128, 00:04:28.101 "task_count": 2048 00:04:28.101 } 00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "bdev", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "bdev_set_options", 00:04:28.101 "params": { 00:04:28.101 "bdev_auto_examine": true, 00:04:28.101 "bdev_io_cache_size": 256, 00:04:28.101 "bdev_io_pool_size": 65535, 00:04:28.101 "iobuf_large_cache_size": 16, 00:04:28.101 "iobuf_small_cache_size": 128 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "bdev_raid_set_options", 00:04:28.101 "params": { 00:04:28.101 "process_max_bandwidth_mb_sec": 0, 00:04:28.101 "process_window_size_kb": 1024 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "bdev_iscsi_set_options", 00:04:28.101 "params": { 00:04:28.101 "timeout_sec": 30 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "bdev_nvme_set_options", 00:04:28.101 "params": { 00:04:28.101 "action_on_timeout": "none", 00:04:28.101 "allow_accel_sequence": false, 00:04:28.101 "arbitration_burst": 0, 00:04:28.101 "bdev_retry_count": 3, 00:04:28.101 "ctrlr_loss_timeout_sec": 0, 00:04:28.101 "delay_cmd_submit": true, 00:04:28.101 "dhchap_dhgroups": [ 00:04:28.101 "null", 00:04:28.101 "ffdhe2048", 00:04:28.101 "ffdhe3072", 00:04:28.101 "ffdhe4096", 00:04:28.101 "ffdhe6144", 00:04:28.101 "ffdhe8192" 00:04:28.101 ], 00:04:28.101 "dhchap_digests": [ 00:04:28.101 "sha256", 00:04:28.101 "sha384", 00:04:28.101 "sha512" 00:04:28.101 ], 00:04:28.101 "disable_auto_failback": false, 00:04:28.101 "fast_io_fail_timeout_sec": 0, 00:04:28.101 "generate_uuids": false, 00:04:28.101 "high_priority_weight": 0, 00:04:28.101 "io_path_stat": false, 00:04:28.101 "io_queue_requests": 0, 00:04:28.101 
"keep_alive_timeout_ms": 10000, 00:04:28.101 "low_priority_weight": 0, 00:04:28.101 "medium_priority_weight": 0, 00:04:28.101 "nvme_adminq_poll_period_us": 10000, 00:04:28.101 "nvme_error_stat": false, 00:04:28.101 "nvme_ioq_poll_period_us": 0, 00:04:28.101 "rdma_cm_event_timeout_ms": 0, 00:04:28.101 "rdma_max_cq_size": 0, 00:04:28.101 "rdma_srq_size": 0, 00:04:28.101 "reconnect_delay_sec": 0, 00:04:28.101 "timeout_admin_us": 0, 00:04:28.101 "timeout_us": 0, 00:04:28.101 "transport_ack_timeout": 0, 00:04:28.101 "transport_retry_count": 4, 00:04:28.101 "transport_tos": 0 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "bdev_nvme_set_hotplug", 00:04:28.101 "params": { 00:04:28.101 "enable": false, 00:04:28.101 "period_us": 100000 00:04:28.101 } 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "method": "bdev_wait_for_examine" 00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "scsi", 00:04:28.101 "config": null 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "scheduler", 00:04:28.101 "config": [ 00:04:28.101 { 00:04:28.101 "method": "framework_set_scheduler", 00:04:28.101 "params": { 00:04:28.101 "name": "static" 00:04:28.101 } 00:04:28.101 } 00:04:28.101 ] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "vhost_scsi", 00:04:28.101 "config": [] 00:04:28.101 }, 00:04:28.101 { 00:04:28.101 "subsystem": "vhost_blk", 00:04:28.101 "config": [] 00:04:28.101 }, 00:04:28.102 { 00:04:28.102 "subsystem": "ublk", 00:04:28.102 "config": [] 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "subsystem": "nbd", 00:04:28.102 "config": [] 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "subsystem": "nvmf", 00:04:28.102 "config": [ 00:04:28.102 { 00:04:28.102 "method": "nvmf_set_config", 00:04:28.102 "params": { 00:04:28.102 "admin_cmd_passthru": { 00:04:28.102 "identify_ctrlr": false 00:04:28.102 }, 00:04:28.102 "dhchap_dhgroups": [ 00:04:28.102 "null", 00:04:28.102 "ffdhe2048", 00:04:28.102 "ffdhe3072", 00:04:28.102 "ffdhe4096", 00:04:28.102 "ffdhe6144", 00:04:28.102 "ffdhe8192" 00:04:28.102 ], 00:04:28.102 "dhchap_digests": [ 00:04:28.102 "sha256", 00:04:28.102 "sha384", 00:04:28.102 "sha512" 00:04:28.102 ], 00:04:28.102 "discovery_filter": "match_any" 00:04:28.102 } 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "method": "nvmf_set_max_subsystems", 00:04:28.102 "params": { 00:04:28.102 "max_subsystems": 1024 00:04:28.102 } 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "method": "nvmf_set_crdt", 00:04:28.102 "params": { 00:04:28.102 "crdt1": 0, 00:04:28.102 "crdt2": 0, 00:04:28.102 "crdt3": 0 00:04:28.102 } 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "method": "nvmf_create_transport", 00:04:28.102 "params": { 00:04:28.102 "abort_timeout_sec": 1, 00:04:28.102 "ack_timeout": 0, 00:04:28.102 "buf_cache_size": 4294967295, 00:04:28.102 "c2h_success": true, 00:04:28.102 "data_wr_pool_size": 0, 00:04:28.102 "dif_insert_or_strip": false, 00:04:28.102 "in_capsule_data_size": 4096, 00:04:28.102 "io_unit_size": 131072, 00:04:28.102 "max_aq_depth": 128, 00:04:28.102 "max_io_qpairs_per_ctrlr": 127, 00:04:28.102 "max_io_size": 131072, 00:04:28.102 "max_queue_depth": 128, 00:04:28.102 "num_shared_buffers": 511, 00:04:28.102 "sock_priority": 0, 00:04:28.102 "trtype": "TCP", 00:04:28.102 "zcopy": false 00:04:28.102 } 00:04:28.102 } 00:04:28.102 ] 00:04:28.102 }, 00:04:28.102 { 00:04:28.102 "subsystem": "iscsi", 00:04:28.102 "config": [ 00:04:28.102 { 00:04:28.102 "method": "iscsi_set_options", 00:04:28.102 "params": { 00:04:28.102 "allow_duplicated_isid": false, 
00:04:28.102 "chap_group": 0, 00:04:28.102 "data_out_pool_size": 2048, 00:04:28.102 "default_time2retain": 20, 00:04:28.102 "default_time2wait": 2, 00:04:28.102 "disable_chap": false, 00:04:28.102 "error_recovery_level": 0, 00:04:28.102 "first_burst_length": 8192, 00:04:28.102 "immediate_data": true, 00:04:28.102 "immediate_data_pool_size": 16384, 00:04:28.102 "max_connections_per_session": 2, 00:04:28.102 "max_large_datain_per_connection": 64, 00:04:28.102 "max_queue_depth": 64, 00:04:28.102 "max_r2t_per_connection": 4, 00:04:28.102 "max_sessions": 128, 00:04:28.102 "mutual_chap": false, 00:04:28.102 "node_base": "iqn.2016-06.io.spdk", 00:04:28.102 "nop_in_interval": 30, 00:04:28.102 "nop_timeout": 60, 00:04:28.102 "pdu_pool_size": 36864, 00:04:28.102 "require_chap": false 00:04:28.102 } 00:04:28.102 } 00:04:28.102 ] 00:04:28.102 } 00:04:28.102 ] 00:04:28.102 } 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58843 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58843 ']' 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58843 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58843 00:04:28.102 killing process with pid 58843 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58843' 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58843 00:04:28.102 17:03:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58843 00:04:28.668 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58875 00:04:28.668 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.668 17:03:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58875 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58875 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:04:33.938 killing process with pid 58875 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58875 00:04:33.938 17:03:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58875 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.198 00:04:34.198 real 0m7.048s 00:04:34.198 user 0m6.399s 00:04:34.198 sys 0m0.835s 00:04:34.198 ************************************ 00:04:34.198 END TEST skip_rpc_with_json 00:04:34.198 ************************************ 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.198 17:03:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:34.198 17:03:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.198 17:03:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.198 17:03:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.198 ************************************ 00:04:34.198 START TEST skip_rpc_with_delay 00:04:34.198 ************************************ 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.198 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.199 [2024-11-19 17:03:26.684724] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
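The failure above is the point of the test: spdk_tgt rejects '--wait-for-rpc' when '--no-rpc-server' is also given, and the NOT wrapper turns that expected non-zero exit into a pass (the es= bookkeeping that follows). Below is a minimal, self-contained reduction of that expected-failure idiom, not the harness's actual NOT implementation, which also resolves and validates the executable path as traced above:

    # Succeed only if the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc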
00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.199 00:04:34.199 real 0m0.119s 00:04:34.199 user 0m0.082s 00:04:34.199 sys 0m0.035s 00:04:34.199 ************************************ 00:04:34.199 END TEST skip_rpc_with_delay 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.199 17:03:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:34.199 ************************************ 00:04:34.458 17:03:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:34.458 17:03:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:34.458 17:03:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:34.458 17:03:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.458 17:03:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.458 17:03:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.458 ************************************ 00:04:34.458 START TEST exit_on_failed_rpc_init 00:04:34.458 ************************************ 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:34.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58990 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58990 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58990 ']' 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.458 17:03:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.458 [2024-11-19 17:03:26.839948] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:34.458 [2024-11-19 17:03:26.840049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:04:34.458 [2024-11-19 17:03:26.985490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.717 [2024-11-19 17:03:27.059293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.976 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.234 [2024-11-19 17:03:27.518205] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:35.234 [2024-11-19 17:03:27.518368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ] 00:04:35.234 [2024-11-19 17:03:27.679726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.234 [2024-11-19 17:03:27.753843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.234 [2024-11-19 17:03:27.753954] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:35.234 [2024-11-19 17:03:27.753972] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:35.234 [2024-11-19 17:03:27.753982] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58990 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58990 ']' 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58990 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58990 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58990' 00:04:35.493 killing process with pid 58990 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58990 00:04:35.493 17:03:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58990 00:04:36.061 00:04:36.061 real 0m1.696s 00:04:36.061 user 0m1.739s 00:04:36.061 sys 0m0.486s 00:04:36.061 ************************************ 00:04:36.061 END TEST exit_on_failed_rpc_init 00:04:36.061 ************************************ 00:04:36.061 17:03:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.061 17:03:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.061 17:03:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.061 ************************************ 00:04:36.061 END TEST skip_rpc 00:04:36.061 ************************************ 00:04:36.061 00:04:36.061 real 0m14.854s 00:04:36.061 user 0m13.626s 00:04:36.061 sys 0m1.847s 00:04:36.061 17:03:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.061 17:03:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.061 17:03:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.061 17:03:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.061 17:03:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.061 17:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.061 
************************************ 00:04:36.061 START TEST rpc_client 00:04:36.061 ************************************ 00:04:36.061 17:03:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.320 * Looking for test storage... 00:04:36.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.320 17:03:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.320 --rc genhtml_branch_coverage=1 00:04:36.320 --rc genhtml_function_coverage=1 00:04:36.320 --rc genhtml_legend=1 00:04:36.320 --rc geninfo_all_blocks=1 00:04:36.320 --rc geninfo_unexecuted_blocks=1 00:04:36.320 00:04:36.320 ' 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.320 --rc genhtml_branch_coverage=1 00:04:36.320 --rc genhtml_function_coverage=1 00:04:36.320 --rc genhtml_legend=1 00:04:36.320 --rc geninfo_all_blocks=1 00:04:36.320 --rc geninfo_unexecuted_blocks=1 00:04:36.320 00:04:36.320 ' 00:04:36.320 17:03:28 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.321 --rc genhtml_branch_coverage=1 00:04:36.321 --rc genhtml_function_coverage=1 00:04:36.321 --rc genhtml_legend=1 00:04:36.321 --rc geninfo_all_blocks=1 00:04:36.321 --rc geninfo_unexecuted_blocks=1 00:04:36.321 00:04:36.321 ' 00:04:36.321 17:03:28 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.321 --rc genhtml_branch_coverage=1 00:04:36.321 --rc genhtml_function_coverage=1 00:04:36.321 --rc genhtml_legend=1 00:04:36.321 --rc geninfo_all_blocks=1 00:04:36.321 --rc geninfo_unexecuted_blocks=1 00:04:36.321 00:04:36.321 ' 00:04:36.321 17:03:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:36.321 OK 00:04:36.321 17:03:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.321 ************************************ 00:04:36.321 END TEST rpc_client 00:04:36.321 ************************************ 00:04:36.321 00:04:36.321 real 0m0.247s 00:04:36.321 user 0m0.157s 00:04:36.321 sys 0m0.099s 00:04:36.321 17:03:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.321 17:03:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:36.586 17:03:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.586 17:03:28 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.586 17:03:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.586 17:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.586 ************************************ 00:04:36.586 START TEST json_config 00:04:36.586 ************************************ 00:04:36.586 17:03:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.586 17:03:28 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.586 17:03:28 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.586 17:03:28 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.586 17:03:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.586 17:03:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.586 17:03:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.586 17:03:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.586 17:03:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.586 17:03:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.586 17:03:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.586 17:03:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.586 17:03:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.586 17:03:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.586 17:03:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.586 17:03:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.586 17:03:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:36.586 17:03:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:36.587 17:03:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.587 17:03:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.587 17:03:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:36.587 17:03:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:36.587 17:03:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.587 17:03:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:36.587 17:03:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.587 17:03:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:36.587 17:03:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:36.587 17:03:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.587 17:03:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:36.587 17:03:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.587 17:03:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.587 17:03:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.587 17:03:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.587 --rc genhtml_branch_coverage=1 00:04:36.587 --rc genhtml_function_coverage=1 00:04:36.587 --rc genhtml_legend=1 00:04:36.587 --rc geninfo_all_blocks=1 00:04:36.587 --rc geninfo_unexecuted_blocks=1 00:04:36.587 00:04:36.587 ' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.587 --rc genhtml_branch_coverage=1 00:04:36.587 --rc genhtml_function_coverage=1 00:04:36.587 --rc genhtml_legend=1 00:04:36.587 --rc geninfo_all_blocks=1 00:04:36.587 --rc geninfo_unexecuted_blocks=1 00:04:36.587 00:04:36.587 ' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.587 --rc genhtml_branch_coverage=1 00:04:36.587 --rc genhtml_function_coverage=1 00:04:36.587 --rc genhtml_legend=1 00:04:36.587 --rc geninfo_all_blocks=1 00:04:36.587 --rc geninfo_unexecuted_blocks=1 00:04:36.587 00:04:36.587 ' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.587 --rc genhtml_branch_coverage=1 00:04:36.587 --rc genhtml_function_coverage=1 00:04:36.587 --rc genhtml_legend=1 00:04:36.587 --rc geninfo_all_blocks=1 00:04:36.587 --rc geninfo_unexecuted_blocks=1 00:04:36.587 00:04:36.587 ' 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.587 17:03:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.587 17:03:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.587 17:03:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.587 17:03:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.587 17:03:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.587 17:03:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.587 17:03:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.587 17:03:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.587 17:03:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:36.587 17:03:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.587 17:03:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.587 17:03:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:36.587 INFO: JSON configuration test init 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.587 17:03:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:36.587 17:03:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.587 17:03:29 json_config -- json_config/common.sh@10 -- # shift 
00:04:36.587 17:03:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.587 17:03:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.587 17:03:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.587 17:03:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.587 17:03:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.587 Waiting for target to run... 00:04:36.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.587 17:03:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59146 00:04:36.587 17:03:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.587 17:03:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:36.587 17:03:29 json_config -- json_config/common.sh@25 -- # waitforlisten 59146 /var/tmp/spdk_tgt.sock 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.587 17:03:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.852 [2024-11-19 17:03:29.147920] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:36.852 [2024-11-19 17:03:29.148410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:04:37.120 [2024-11-19 17:03:29.586695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.377 [2024-11-19 17:03:29.650366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.943 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:37.943 17:03:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.943 17:03:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:37.943 17:03:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:37.943 17:03:30 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:38.509 17:03:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.509 17:03:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:38.509 17:03:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:38.509 17:03:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:38.767 17:03:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.767 17:03:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:38.767 17:03:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.767 17:03:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:38.767 17:03:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:38.767 17:03:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.025 MallocForNvmf0 00:04:39.283 17:03:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.283 17:03:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.541 MallocForNvmf1 00:04:39.541 17:03:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.541 17:03:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.798 [2024-11-19 17:03:32.167499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.798 17:03:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:39.798 17:03:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:40.055 17:03:32 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.055 17:03:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.312 17:03:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.312 17:03:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.570 17:03:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.570 17:03:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.829 [2024-11-19 17:03:33.324159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.829 17:03:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:40.829 17:03:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.829 17:03:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.087 17:03:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:41.087 17:03:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.087 17:03:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.087 17:03:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:41.087 17:03:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:04:41.087 17:03:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.346 MallocBdevForConfigChangeCheck 00:04:41.346 17:03:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:41.346 17:03:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.346 17:03:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.346 17:03:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:41.346 17:03:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.913 INFO: shutting down applications... 00:04:41.913 17:03:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:41.913 17:03:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:41.913 17:03:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:41.913 17:03:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:41.913 17:03:34 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:42.172 Calling clear_iscsi_subsystem 00:04:42.172 Calling clear_nvmf_subsystem 00:04:42.172 Calling clear_nbd_subsystem 00:04:42.172 Calling clear_ublk_subsystem 00:04:42.172 Calling clear_vhost_blk_subsystem 00:04:42.172 Calling clear_vhost_scsi_subsystem 00:04:42.172 Calling clear_bdev_subsystem 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.172 17:03:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:42.738 17:03:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:42.738 17:03:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:42.738 17:03:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:42.738 17:03:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:42.738 17:03:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.738 17:03:34 json_config -- json_config/common.sh@35 -- # [[ -n 59146 ]] 00:04:42.738 17:03:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59146 00:04:42.738 17:03:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.738 17:03:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.738 17:03:34 json_config -- json_config/common.sh@41 -- # kill -0 59146 00:04:42.738 17:03:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.997 17:03:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.997 17:03:35 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.997 17:03:35 json_config -- json_config/common.sh@41 -- # kill -0 59146 00:04:42.997 17:03:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.997 17:03:35 json_config -- json_config/common.sh@43 -- # break 00:04:42.997 17:03:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.997 SPDK target shutdown done 00:04:42.997 17:03:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.997 INFO: relaunching applications... 00:04:42.997 17:03:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:42.997 17:03:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.997 17:03:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.997 17:03:35 json_config -- json_config/common.sh@10 -- # shift 00:04:42.997 17:03:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.997 17:03:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.997 17:03:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.997 17:03:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.997 17:03:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.997 17:03:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59432 00:04:42.997 17:03:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.997 Waiting for target to run... 00:04:42.997 17:03:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.997 17:03:35 json_config -- json_config/common.sh@25 -- # waitforlisten 59432 /var/tmp/spdk_tgt.sock 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 59432 ']' 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.997 17:03:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.255 [2024-11-19 17:03:35.557869] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:43.255 [2024-11-19 17:03:35.557982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:04:43.822 [2024-11-19 17:03:36.096298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.822 [2024-11-19 17:03:36.160357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.079 [2024-11-19 17:03:36.522900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.079 [2024-11-19 17:03:36.555040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.337 17:03:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.337 17:03:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:44.337 00:04:44.337 17:03:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.337 17:03:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:44.337 INFO: Checking if target configuration is the same... 00:04:44.337 17:03:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:44.337 17:03:36 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.337 17:03:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:44.337 17:03:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.337 + '[' 2 -ne 2 ']' 00:04:44.337 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:44.337 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:44.337 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:44.337 +++ basename /dev/fd/62 00:04:44.337 ++ mktemp /tmp/62.XXX 00:04:44.337 + tmp_file_1=/tmp/62.9TG 00:04:44.337 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.337 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.337 + tmp_file_2=/tmp/spdk_tgt_config.json.EiH 00:04:44.337 + ret=0 00:04:44.337 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.603 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.896 + diff -u /tmp/62.9TG /tmp/spdk_tgt_config.json.EiH 00:04:44.896 INFO: JSON config files are the same 00:04:44.896 + echo 'INFO: JSON config files are the same' 00:04:44.896 + rm /tmp/62.9TG /tmp/spdk_tgt_config.json.EiH 00:04:44.896 + exit 0 00:04:44.896 17:03:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:44.896 INFO: changing configuration and checking if this can be detected... 00:04:44.896 17:03:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
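The bdev deleted next is the throwaway created back at 17:03:33 for exactly this purpose: MallocBdevForConfigChangeCheck exists only so that removing it later produces a guaranteed, harmless difference between the saved and live configurations. Creation and removal are plain JSON-RPC calls; a sketch using this run's socket path and the argument form shown in the trace (8 MB total, 512-byte blocks):

    # Create and later delete a marker bdev so a config change is detectable.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck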
00:04:44.896 17:03:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.896 17:03:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:45.155 17:03:37 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.155 17:03:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:45.155 17:03:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.155 + '[' 2 -ne 2 ']' 00:04:45.155 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:45.155 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:45.155 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:45.155 +++ basename /dev/fd/62 00:04:45.155 ++ mktemp /tmp/62.XXX 00:04:45.155 + tmp_file_1=/tmp/62.z9Y 00:04:45.155 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.155 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.155 + tmp_file_2=/tmp/spdk_tgt_config.json.wJt 00:04:45.155 + ret=0 00:04:45.155 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.414 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.672 + diff -u /tmp/62.z9Y /tmp/spdk_tgt_config.json.wJt 00:04:45.672 + ret=1 00:04:45.672 + echo '=== Start of file: /tmp/62.z9Y ===' 00:04:45.672 + cat /tmp/62.z9Y 00:04:45.672 + echo '=== End of file: /tmp/62.z9Y ===' 00:04:45.672 + echo '' 00:04:45.672 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wJt ===' 00:04:45.672 + cat /tmp/spdk_tgt_config.json.wJt 00:04:45.672 + echo '=== End of file: /tmp/spdk_tgt_config.json.wJt ===' 00:04:45.672 + echo '' 00:04:45.672 + rm /tmp/62.z9Y /tmp/spdk_tgt_config.json.wJt 00:04:45.672 + exit 1 00:04:45.672 INFO: configuration change detected. 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
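[Editor's note] The deliberate bdev_malloc_delete of MallocBdevForConfigChangeCheck above guarantees the next save_config snapshot diverges from the file, so the ret=1 from diff that follows is the pass condition, not a failure. After detection is confirmed, killprocess tears the target down; a simplified sketch of its guard rails, following the checks visible in the trace (kill -0 liveness, ps comm lookup, refusing to signal sudo):

    # Signal an SPDK target by PID after sanity-checking what the PID is.
    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2> /dev/null || return 0   # already gone
        name=$(ps --no-headers -o comm= "$pid")   # expect e.g. reactor_0
        [[ $name == sudo ]] && return 1           # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                # wait works: tgt is our child
    }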
00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 59432 ]] 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:45.672 17:03:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.672 17:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.672 17:03:38 json_config -- json_config/json_config.sh@330 -- # killprocess 59432 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@954 -- # '[' -z 59432 ']' 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@958 -- # kill -0 59432 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@959 -- # uname 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59432 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.672 killing process with pid 59432 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59432' 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@973 -- # kill 59432 00:04:45.672 17:03:38 json_config -- common/autotest_common.sh@978 -- # wait 59432 00:04:45.932 17:03:38 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.932 17:03:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:45.932 17:03:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.932 17:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.932 17:03:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:45.932 INFO: Success 00:04:45.932 17:03:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:45.932 00:04:45.932 real 0m9.570s 00:04:45.932 user 0m13.994s 00:04:45.932 sys 0m2.035s 00:04:45.932 
17:03:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.932 17:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.932 ************************************ 00:04:45.932 END TEST json_config 00:04:45.932 ************************************ 00:04:46.191 17:03:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.191 17:03:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.191 17:03:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.191 17:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.191 ************************************ 00:04:46.191 START TEST json_config_extra_key 00:04:46.191 ************************************ 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.191 --rc genhtml_branch_coverage=1 00:04:46.191 --rc genhtml_function_coverage=1 00:04:46.191 --rc genhtml_legend=1 00:04:46.191 --rc geninfo_all_blocks=1 00:04:46.191 --rc geninfo_unexecuted_blocks=1 00:04:46.191 00:04:46.191 ' 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.191 --rc genhtml_branch_coverage=1 00:04:46.191 --rc genhtml_function_coverage=1 00:04:46.191 --rc genhtml_legend=1 00:04:46.191 --rc geninfo_all_blocks=1 00:04:46.191 --rc geninfo_unexecuted_blocks=1 00:04:46.191 00:04:46.191 ' 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.191 --rc genhtml_branch_coverage=1 00:04:46.191 --rc genhtml_function_coverage=1 00:04:46.191 --rc genhtml_legend=1 00:04:46.191 --rc geninfo_all_blocks=1 00:04:46.191 --rc geninfo_unexecuted_blocks=1 00:04:46.191 00:04:46.191 ' 00:04:46.191 17:03:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.191 --rc genhtml_branch_coverage=1 00:04:46.191 --rc genhtml_function_coverage=1 00:04:46.191 --rc genhtml_legend=1 00:04:46.191 --rc geninfo_all_blocks=1 00:04:46.191 --rc geninfo_unexecuted_blocks=1 00:04:46.191 00:04:46.191 ' 00:04:46.191 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.191 17:03:38 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.191 17:03:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.191 17:03:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.191 17:03:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.191 17:03:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.191 17:03:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.191 17:03:38 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:46.191 17:03:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.192 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.192 17:03:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.192 INFO: launching applications... 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
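[Editor's note] Note the error captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator demands integers on both sides, so an empty or unset variable makes the comparison itself error out with "integer expression expected" (status 2) instead of cleanly returning false; the run continues only because the surrounding if treats that failure as false. The usual guard is a default expansion, sketched here with a hypothetical FLAG variable (the actual variable tested at line 33 is not shown in this log):

    # [ "$FLAG" -eq 1 ] aborts when FLAG is unset or empty;
    # a :-0 default keeps the test well-defined either way.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi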
00:04:46.192 17:03:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59620 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.192 Waiting for target to run... 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.192 17:03:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59620 /var/tmp/spdk_tgt.sock 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59620 ']' 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.192 17:03:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.450 [2024-11-19 17:03:38.780571] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:46.450 [2024-11-19 17:03:38.780688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:04:46.709 [2024-11-19 17:03:39.210927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.967 [2024-11-19 17:03:39.273141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.535 00:04:47.535 INFO: shutting down applications... 00:04:47.535 17:03:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.535 17:03:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.535 17:03:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
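[Editor's note] The shutdown that follows is a bounded poll: json_config/common.sh sends SIGINT, then loops up to 30 times checking kill -0 with a 0.5 s sleep between attempts, roughly 15 seconds of grace before giving up. The same loop as a standalone sketch; what the real helper does after 30 misses is not shown in this log, so the kill -9 fallback here is an illustrative assumption:

    # Ask the target to exit, then wait up to ~15 s for the PID to vanish.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2> /dev/null || return 0
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2> /dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        kill -9 "$pid"   # assumption: force-kill on timeout
    }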
00:04:47.535 17:03:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59620 ]] 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59620 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59620 00:04:47.535 17:03:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.102 17:03:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.102 17:03:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.102 17:03:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59620 00:04:48.102 17:03:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59620 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.361 SPDK target shutdown done 00:04:48.361 Success 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.361 17:03:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.361 17:03:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.361 00:04:48.361 real 0m2.377s 00:04:48.361 user 0m1.985s 00:04:48.361 sys 0m0.489s 00:04:48.361 17:03:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.361 ************************************ 00:04:48.361 17:03:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.361 END TEST json_config_extra_key 00:04:48.361 ************************************ 00:04:48.620 17:03:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.620 17:03:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.620 17:03:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.620 17:03:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.620 ************************************ 00:04:48.620 START TEST alias_rpc 00:04:48.620 ************************************ 00:04:48.620 17:03:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.620 * Looking for test storage... 
00:04:48.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.620 17:03:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.620 17:03:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.621 17:03:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.621 17:03:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.621 --rc genhtml_branch_coverage=1 00:04:48.621 --rc genhtml_function_coverage=1 00:04:48.621 --rc genhtml_legend=1 00:04:48.621 --rc geninfo_all_blocks=1 00:04:48.621 --rc geninfo_unexecuted_blocks=1 00:04:48.621 00:04:48.621 ' 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.621 --rc genhtml_branch_coverage=1 00:04:48.621 --rc genhtml_function_coverage=1 00:04:48.621 --rc genhtml_legend=1 00:04:48.621 --rc geninfo_all_blocks=1 00:04:48.621 --rc geninfo_unexecuted_blocks=1 00:04:48.621 00:04:48.621 ' 00:04:48.621 17:03:41 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.621 --rc genhtml_branch_coverage=1 00:04:48.621 --rc genhtml_function_coverage=1 00:04:48.621 --rc genhtml_legend=1 00:04:48.621 --rc geninfo_all_blocks=1 00:04:48.621 --rc geninfo_unexecuted_blocks=1 00:04:48.621 00:04:48.621 ' 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.621 --rc genhtml_branch_coverage=1 00:04:48.621 --rc genhtml_function_coverage=1 00:04:48.621 --rc genhtml_legend=1 00:04:48.621 --rc geninfo_all_blocks=1 00:04:48.621 --rc geninfo_unexecuted_blocks=1 00:04:48.621 00:04:48.621 ' 00:04:48.621 17:03:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.621 17:03:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59718 00:04:48.621 17:03:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59718 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59718 ']' 00:04:48.621 17:03:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.621 17:03:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.880 [2024-11-19 17:03:41.173043] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:48.880 [2024-11-19 17:03:41.173196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:04:48.880 [2024-11-19 17:03:41.322714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.880 [2024-11-19 17:03:41.399549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.815 17:03:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.815 17:03:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.815 17:03:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:50.074 17:03:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59718 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59718 ']' 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59718 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59718 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.074 killing process with pid 59718 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59718' 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 59718 00:04:50.074 17:03:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 59718 00:04:50.641 00:04:50.641 real 0m2.214s 00:04:50.641 user 0m2.455s 00:04:50.641 sys 0m0.607s 00:04:50.641 17:03:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.641 17:03:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.641 ************************************ 00:04:50.641 END TEST alias_rpc 00:04:50.641 ************************************ 00:04:50.641 17:03:43 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:04:50.641 17:03:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.641 17:03:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.641 17:03:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.641 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.900 ************************************ 00:04:50.900 START TEST dpdk_mem_utility 00:04:50.900 ************************************ 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.900 * Looking for test storage... 
00:04:50.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.900 17:03:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.900 --rc genhtml_branch_coverage=1 00:04:50.900 --rc genhtml_function_coverage=1 00:04:50.900 --rc genhtml_legend=1 00:04:50.900 --rc geninfo_all_blocks=1 00:04:50.900 --rc geninfo_unexecuted_blocks=1 00:04:50.900 00:04:50.900 ' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.900 --rc 
genhtml_branch_coverage=1 00:04:50.900 --rc genhtml_function_coverage=1 00:04:50.900 --rc genhtml_legend=1 00:04:50.900 --rc geninfo_all_blocks=1 00:04:50.900 --rc geninfo_unexecuted_blocks=1 00:04:50.900 00:04:50.900 ' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.900 --rc genhtml_branch_coverage=1 00:04:50.900 --rc genhtml_function_coverage=1 00:04:50.900 --rc genhtml_legend=1 00:04:50.900 --rc geninfo_all_blocks=1 00:04:50.900 --rc geninfo_unexecuted_blocks=1 00:04:50.900 00:04:50.900 ' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.900 --rc genhtml_branch_coverage=1 00:04:50.900 --rc genhtml_function_coverage=1 00:04:50.900 --rc genhtml_legend=1 00:04:50.900 --rc geninfo_all_blocks=1 00:04:50.900 --rc geninfo_unexecuted_blocks=1 00:04:50.900 00:04:50.900 ' 00:04:50.900 17:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.900 17:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59819 00:04:50.900 17:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.900 17:03:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59819 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59819 ']' 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.900 17:03:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.900 [2024-11-19 17:03:43.428516] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
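[Editor's note] A few entries up, the dpdk_mem_utility suite repeats the scripts/common.sh lcov probe seen in every suite: lt 1.15 2 asks whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' (the IFS=.-: and read -ra ver1 traces) and comparing them component by component, with the decimal helper validating each numeric part. Reduced to a self-contained sketch that skips that validation step:

    # Component-wise version compare: succeeds when $1 < $2.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc options"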
00:04:50.900 [2024-11-19 17:03:43.428866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59819 ] 00:04:51.159 [2024-11-19 17:03:43.572648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.159 [2024-11-19 17:03:43.656972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.728 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.728 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:51.728 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.728 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.728 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.728 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.728 { 00:04:51.728 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.728 } 00:04:51.728 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.728 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.728 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:51.728 1 heaps totaling size 810.000000 MiB 00:04:51.728 size: 810.000000 MiB heap id: 0 00:04:51.728 end heaps---------- 00:04:51.728 9 mempools totaling size 595.772034 MiB 00:04:51.728 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.728 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.728 size: 92.545471 MiB name: bdev_io_59819 00:04:51.728 size: 50.003479 MiB name: msgpool_59819 00:04:51.728 size: 36.509338 MiB name: fsdev_io_59819 00:04:51.728 size: 21.763794 MiB name: PDU_Pool 00:04:51.728 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.728 size: 4.133484 MiB name: evtpool_59819 00:04:51.728 size: 0.026123 MiB name: Session_Pool 00:04:51.728 end mempools------- 00:04:51.728 6 memzones totaling size 4.142822 MiB 00:04:51.728 size: 1.000366 MiB name: RG_ring_0_59819 00:04:51.728 size: 1.000366 MiB name: RG_ring_1_59819 00:04:51.728 size: 1.000366 MiB name: RG_ring_4_59819 00:04:51.728 size: 1.000366 MiB name: RG_ring_5_59819 00:04:51.728 size: 0.125366 MiB name: RG_ring_2_59819 00:04:51.728 size: 0.015991 MiB name: RG_ring_3_59819 00:04:51.728 end memzones------- 00:04:51.728 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.728 heap id: 0 total size: 810.000000 MiB number of busy elements: 215 number of free elements: 15 00:04:51.728 list of free elements. 
size: 10.831177 MiB 00:04:51.728 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:51.728 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:51.728 element at address: 0x200000400000 with size: 0.996155 MiB 00:04:51.728 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:51.728 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:51.728 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:51.728 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:51.728 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:51.728 element at address: 0x20001a600000 with size: 0.572449 MiB 00:04:51.728 element at address: 0x200000c00000 with size: 0.491211 MiB 00:04:51.728 element at address: 0x20000a600000 with size: 0.489990 MiB 00:04:51.728 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:51.728 element at address: 0x200003e00000 with size: 0.481201 MiB 00:04:51.728 element at address: 0x200027a00000 with size: 0.398865 MiB 00:04:51.728 element at address: 0x200000800000 with size: 0.353394 MiB 00:04:51.728 list of standard malloc elements. size: 199.249939 MiB 00:04:51.728 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:51.728 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:51.728 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:51.728 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:51.728 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:51.728 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:51.728 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:51.728 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:51.728 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:51.728 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000085a780 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000085a980 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:04:51.728 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:51.728 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:51.728 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:51.729 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:51.729 element at 
address: 0x20000a67d940 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694780 
with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:51.729 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a661c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a66280 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ce80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e580 with size: 0.000183 MiB 
00:04:51.729 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:51.729 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:51.729 list of memzone associated elements. 
size: 599.918884 MiB 00:04:51.729 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:51.729 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.729 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:51.729 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.729 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:51.729 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59819_0 00:04:51.729 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:51.730 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59819_0 00:04:51.730 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:51.730 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59819_0 00:04:51.730 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:51.730 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.730 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:51.730 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.730 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:51.730 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59819_0 00:04:51.730 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:51.730 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59819 00:04:51.730 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:51.730 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59819 00:04:51.730 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:51.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.730 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:51.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.730 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:51.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.730 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:51.730 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.730 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:51.730 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59819 00:04:51.730 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:51.730 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59819 00:04:51.730 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:51.730 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59819 00:04:51.730 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:51.730 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59819 00:04:51.730 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:51.730 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59819 00:04:51.730 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:51.730 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59819 00:04:51.730 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:51.730 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.730 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:51.730 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.730 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:51.730 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.730 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:51.730 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59819 00:04:51.730 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:04:51.730 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59819 00:04:51.730 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:51.730 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.730 element at address: 0x200027a66340 with size: 0.023743 MiB 00:04:51.730 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.730 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:04:51.730 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59819 00:04:51.730 element at address: 0x200027a6c480 with size: 0.002441 MiB 00:04:51.730 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.730 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:51.730 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59819 00:04:51.730 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:51.730 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59819 00:04:51.730 element at address: 0x20000085a840 with size: 0.000305 MiB 00:04:51.730 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59819 00:04:51.730 element at address: 0x200027a6cf40 with size: 0.000305 MiB 00:04:51.730 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.730 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.730 17:03:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59819 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59819 ']' 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59819 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59819 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.730 killing process with pid 59819 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59819' 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59819 00:04:51.730 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59819 00:04:52.297 ************************************ 00:04:52.297 END TEST dpdk_mem_utility 00:04:52.297 ************************************ 00:04:52.297 00:04:52.297 real 0m1.565s 00:04:52.297 user 0m1.394s 00:04:52.297 sys 0m0.525s 00:04:52.297 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.297 17:03:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.297 17:03:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.297 17:03:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.297 17:03:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.297 17:03:44 -- common/autotest_common.sh@10 -- # set +x 
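The killprocess trace above shows the shutdown pattern used for every daemonized test process in this log: check the pid is still alive with kill -0, make sure the target is not a sudo wrapper, then kill and reap it. A minimal sketch reconstructed from that xtrace; the real helper in common/autotest_common.sh carries extra guards (for example different handling when the pid belongs to sudo) and may differ in detail:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid supplied
    kill -0 "$pid" || return 0                # process already gone
    if [ "$(uname)" = Linux ]; then
        # never signal a sudo wrapper directly
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # assumption: the real helper targets the child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                       # reap; the exit status is not checked here
}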
00:04:52.297 ************************************ 00:04:52.297 START TEST event 00:04:52.297 ************************************ 00:04:52.297 17:03:44 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.556 * Looking for test storage... 00:04:52.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.556 17:03:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.556 17:03:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.556 17:03:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.556 17:03:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.556 17:03:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.556 17:03:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.556 17:03:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.556 17:03:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.556 17:03:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.556 17:03:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.556 17:03:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.556 17:03:44 event -- scripts/common.sh@344 -- # case "$op" in 00:04:52.556 17:03:44 event -- scripts/common.sh@345 -- # : 1 00:04:52.556 17:03:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.556 17:03:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.556 17:03:44 event -- scripts/common.sh@365 -- # decimal 1 00:04:52.556 17:03:44 event -- scripts/common.sh@353 -- # local d=1 00:04:52.556 17:03:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.556 17:03:44 event -- scripts/common.sh@355 -- # echo 1 00:04:52.556 17:03:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.556 17:03:44 event -- scripts/common.sh@366 -- # decimal 2 00:04:52.556 17:03:44 event -- scripts/common.sh@353 -- # local d=2 00:04:52.556 17:03:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.556 17:03:44 event -- scripts/common.sh@355 -- # echo 2 00:04:52.556 17:03:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.556 17:03:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.556 17:03:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.556 17:03:44 event -- scripts/common.sh@368 -- # return 0 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.556 --rc genhtml_branch_coverage=1 00:04:52.556 --rc genhtml_function_coverage=1 00:04:52.556 --rc genhtml_legend=1 00:04:52.556 --rc geninfo_all_blocks=1 00:04:52.556 --rc geninfo_unexecuted_blocks=1 00:04:52.556 00:04:52.556 ' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.556 --rc genhtml_branch_coverage=1 00:04:52.556 --rc genhtml_function_coverage=1 00:04:52.556 --rc genhtml_legend=1 00:04:52.556 --rc 
geninfo_all_blocks=1 00:04:52.556 --rc geninfo_unexecuted_blocks=1 00:04:52.556 00:04:52.556 ' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.556 --rc genhtml_branch_coverage=1 00:04:52.556 --rc genhtml_function_coverage=1 00:04:52.556 --rc genhtml_legend=1 00:04:52.556 --rc geninfo_all_blocks=1 00:04:52.556 --rc geninfo_unexecuted_blocks=1 00:04:52.556 00:04:52.556 ' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.556 --rc genhtml_branch_coverage=1 00:04:52.556 --rc genhtml_function_coverage=1 00:04:52.556 --rc genhtml_legend=1 00:04:52.556 --rc geninfo_all_blocks=1 00:04:52.556 --rc geninfo_unexecuted_blocks=1 00:04:52.556 00:04:52.556 ' 00:04:52.556 17:03:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:52.556 17:03:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.556 17:03:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:52.556 17:03:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.556 17:03:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.556 ************************************ 00:04:52.556 START TEST event_perf 00:04:52.556 ************************************ 00:04:52.556 17:03:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.556 Running I/O for 1 seconds...[2024-11-19 17:03:45.024994] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:52.556 [2024-11-19 17:03:45.025120] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:04:52.815 [2024-11-19 17:03:45.173853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.815 [2024-11-19 17:03:45.253238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.815 [2024-11-19 17:03:45.253395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.815 [2024-11-19 17:03:45.253534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.815 [2024-11-19 17:03:45.253542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.268 Running I/O for 1 seconds... 00:04:54.268 lcore 0: 119655 00:04:54.268 lcore 1: 119656 00:04:54.268 lcore 2: 119656 00:04:54.268 lcore 3: 119656 00:04:54.268 done. 
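The scripts/common.sh trace at the start of this test (lt 1.15 2, which expands to cmp_versions 1.15 '<' 2) is a field-wise numeric version comparison: both strings are split on '.', '-' and ':' and the fields are compared element by element. A sketch reconstructed from that xtrace, with the per-field decimal validation simplified away:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                 # split fields on dot, dash and colon
    local -a ver1 ver2
    local op=$2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing fields count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}

Here lt 1.15 2 is true, which is why the trace goes on to select the '--rc lcov_branch_coverage=1 ...' option spelling for the installed lcov generation.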
00:04:54.268 00:04:54.268 real 0m1.319s 00:04:54.268 user 0m4.137s 00:04:54.268 sys 0m0.059s 00:04:54.268 17:03:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.268 17:03:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.268 ************************************ 00:04:54.268 END TEST event_perf 00:04:54.268 ************************************ 00:04:54.268 17:03:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.268 17:03:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:54.268 17:03:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.268 17:03:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.268 ************************************ 00:04:54.268 START TEST event_reactor 00:04:54.268 ************************************ 00:04:54.268 17:03:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.268 [2024-11-19 17:03:46.405291] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:54.268 [2024-11-19 17:03:46.405560] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:04:54.268 [2024-11-19 17:03:46.552143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.268 [2024-11-19 17:03:46.616247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.203 test_start 00:04:55.203 oneshot 00:04:55.203 tick 100 00:04:55.203 tick 100 00:04:55.203 tick 250 00:04:55.203 tick 100 00:04:55.203 tick 100 00:04:55.203 tick 250 00:04:55.203 tick 100 00:04:55.203 tick 500 00:04:55.203 tick 100 00:04:55.203 tick 100 00:04:55.203 tick 250 00:04:55.203 tick 100 00:04:55.203 tick 100 00:04:55.203 test_end 00:04:55.203 00:04:55.203 real 0m1.293s 00:04:55.203 user 0m1.135s 00:04:55.203 sys 0m0.052s 00:04:55.203 ************************************ 00:04:55.203 END TEST event_reactor 00:04:55.203 ************************************ 00:04:55.203 17:03:47 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.203 17:03:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:55.203 17:03:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.203 17:03:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.203 17:03:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.203 17:03:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.203 ************************************ 00:04:55.203 START TEST event_reactor_perf 00:04:55.203 ************************************ 00:04:55.203 17:03:47 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.462 [2024-11-19 17:03:47.749722] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
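Every test in this log is driven through the same run_test wrapper: it prints the starred START TEST banner, runs the command under the shell's time builtin (which produces the real/user/sys triplet seen above), and closes with the END TEST banner. A rough sketch inferred from those banners and the xtrace line numbers; the actual wrapper in common/autotest_common.sh also manages xtrace and argument checks (the '[' 2 -le 1 ']' guard visible in the trace):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # bash's time keyword emits the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}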
00:04:55.462 [2024-11-19 17:03:47.749829] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:04:55.462 [2024-11-19 17:03:47.887716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.462 [2024-11-19 17:03:47.950961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.837 test_start 00:04:56.837 test_end 00:04:56.837 Performance: 389626 events per second 00:04:56.837 ************************************ 00:04:56.837 END TEST event_reactor_perf 00:04:56.837 ************************************ 00:04:56.837 00:04:56.837 real 0m1.298s 00:04:56.837 user 0m1.146s 00:04:56.837 sys 0m0.045s 00:04:56.837 17:03:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.837 17:03:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.837 17:03:49 event -- event/event.sh@49 -- # uname -s 00:04:56.837 17:03:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.837 17:03:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:56.837 17:03:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.837 17:03:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.837 17:03:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.837 ************************************ 00:04:56.837 START TEST event_scheduler 00:04:56.837 ************************************ 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:56.837 * Looking for test storage... 
00:04:56.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.837 17:03:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.837 --rc genhtml_branch_coverage=1 00:04:56.837 --rc genhtml_function_coverage=1 00:04:56.837 --rc genhtml_legend=1 00:04:56.837 --rc geninfo_all_blocks=1 00:04:56.837 --rc geninfo_unexecuted_blocks=1 00:04:56.837 00:04:56.837 ' 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.837 --rc genhtml_branch_coverage=1 00:04:56.837 --rc genhtml_function_coverage=1 00:04:56.837 --rc genhtml_legend=1 00:04:56.837 --rc geninfo_all_blocks=1 00:04:56.837 --rc geninfo_unexecuted_blocks=1 00:04:56.837 00:04:56.837 ' 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.837 --rc genhtml_branch_coverage=1 00:04:56.837 --rc genhtml_function_coverage=1 00:04:56.837 --rc genhtml_legend=1 00:04:56.837 --rc geninfo_all_blocks=1 00:04:56.837 --rc geninfo_unexecuted_blocks=1 00:04:56.837 00:04:56.837 ' 00:04:56.837 17:03:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.837 --rc genhtml_branch_coverage=1 00:04:56.837 --rc genhtml_function_coverage=1 00:04:56.837 --rc genhtml_legend=1 00:04:56.838 --rc geninfo_all_blocks=1 00:04:56.838 --rc geninfo_unexecuted_blocks=1 00:04:56.838 00:04:56.838 ' 00:04:56.838 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.838 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60052 00:04:56.838 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.838 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.838 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60052 00:04:56.838 17:03:49 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60052 ']' 00:04:56.838 17:03:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.838 17:03:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.838 17:03:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.838 17:03:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.838 17:03:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.095 [2024-11-19 17:03:49.396284] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:57.095 [2024-11-19 17:03:49.396722] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:04:57.095 [2024-11-19 17:03:49.550979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.355 [2024-11-19 17:03:49.628820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.355 [2024-11-19 17:03:49.628992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.355 [2024-11-19 17:03:49.628851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.355 [2024-11-19 17:03:49.628999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:57.355 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.355 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.355 POWER: Cannot set governor of lcore 0 to performance 00:04:57.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.355 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.355 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.355 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:57.355 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:57.355 POWER: Unable to set Power Management Environment for lcore 0 00:04:57.355 [2024-11-19 17:03:49.690029] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:57.355 [2024-11-19 17:03:49.690379] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:57.355 [2024-11-19 17:03:49.690821] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:57.355 [2024-11-19 17:03:49.691037] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:57.355 [2024-11-19 17:03:49.691350] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:57.355 [2024-11-19 17:03:49.691626] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 [2024-11-19 17:03:49.793432] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 ************************************ 00:04:57.355 START TEST scheduler_create_thread 00:04:57.355 ************************************ 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 2 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 3 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 4 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 5 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 6 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 7 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 8 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.355 9 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.355 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.614 10 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.614 17:03:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.987 17:03:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.987 17:03:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.988 17:03:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.988 17:03:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.988 17:03:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.922 17:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.922 00:04:59.922 real 0m2.617s 00:04:59.922 user 0m0.011s 00:04:59.922 sys 0m0.008s 00:04:59.922 ************************************ 00:04:59.922 END TEST scheduler_create_thread 00:04:59.922 ************************************ 00:04:59.922 17:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.922 17:03:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.180 17:03:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.180 17:03:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60052 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60052 ']' 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60052 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60052 00:05:00.180 killing process with pid 60052 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60052' 00:05:00.180 17:03:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60052 00:05:00.180 17:03:52 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60052 00:05:00.438 [2024-11-19 17:03:52.902325] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:00.697 00:05:00.697 real 0m4.031s 00:05:00.697 user 0m5.868s 00:05:00.697 sys 0m0.369s 00:05:00.697 ************************************ 00:05:00.697 END TEST event_scheduler 00:05:00.697 ************************************ 00:05:00.697 17:03:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.697 17:03:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.697 17:03:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.697 17:03:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.697 17:03:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.697 17:03:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.697 17:03:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.697 ************************************ 00:05:00.697 START TEST app_repeat 00:05:00.697 ************************************ 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60144 00:05:00.697 Process app_repeat pid: 60144 00:05:00.697 spdk_app_start Round 0 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60144' 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.697 17:03:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60144 /var/tmp/spdk-nbd.sock 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60144 ']' 00:05:00.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.697 17:03:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.697 [2024-11-19 17:03:53.217620] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
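Both service processes in this section (the scheduler app, pid 60052, and the app_repeat app, pid 60144) are started in the background and then gated on waitforlisten, which printed the 'Waiting for process to start up and listen on UNIX domain socket ...' lines above. Its polling body runs behind xtrace_disable, so the probe below is an assumption; only the locals, the retry budget, and the echo come straight from the trace ($rootdir stands in for the spdk repo path shown in the log):

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [ -z "$pid" ] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" || return 1        # target died while we waited
        # assumed probe: any answered RPC means the socket is up
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5                         # assumed back-off
    done
    return 1                              # retries exhausted
}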
00:05:00.697 [2024-11-19 17:03:53.217729] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:05:00.955 [2024-11-19 17:03:53.361797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.955 [2024-11-19 17:03:53.443770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.955 [2024-11-19 17:03:53.443770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.214 17:03:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.214 17:03:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.214 17:03:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.472 Malloc0 00:05:01.472 17:03:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.737 Malloc1 00:05:01.737 17:03:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.737 17:03:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.341 /dev/nbd0 00:05:02.341 17:03:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.341 17:03:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.341 17:03:54 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.341 1+0 records in 00:05:02.341 1+0 records out 00:05:02.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344035 s, 11.9 MB/s 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.341 17:03:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.341 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.341 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.341 17:03:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.598 /dev/nbd1 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.598 1+0 records in 00:05:02.598 1+0 records out 00:05:02.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413483 s, 9.9 MB/s 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.598 17:03:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.598 17:03:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.598 
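The waitfornbd trace above (local nbd_name=nbd0, the i <= 20 loops, the grep on /proc/partitions, and the direct-I/O dd) reconstructs to roughly the following: wait for the kernel to expose the device node, then prove a 4 KiB direct read from it works. Sketch from the xtrace; the sleep between polls is hidden in the log and therefore assumed:

waitfornbd() {
    local nbd_name=$1 i size
    local test_file=nbdtest      # the log writes to .../spdk/test/event/nbdtest
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                # assumed poll interval
    done
    for (( i = 1; i <= 20; i++ )); do
        # read one 4 KiB block straight off the device, bypassing the page cache
        dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct || continue
        size=$(stat -c %s "$test_file")
        rm -f "$test_file"
        [ "$size" != 0 ] && return 0
    done
    return 1
}

The same dd/stat/rm sequence appears once per device in this log (nbd0 and nbd1), each reporting the '1+0 records in / 1+0 records out / 4096 bytes' lines.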
17:03:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.855 17:03:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.855 { 00:05:02.855 "bdev_name": "Malloc0", 00:05:02.855 "nbd_device": "/dev/nbd0" 00:05:02.855 }, 00:05:02.855 { 00:05:02.855 "bdev_name": "Malloc1", 00:05:02.855 "nbd_device": "/dev/nbd1" 00:05:02.855 } 00:05:02.855 ]' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.856 { 00:05:02.856 "bdev_name": "Malloc0", 00:05:02.856 "nbd_device": "/dev/nbd0" 00:05:02.856 }, 00:05:02.856 { 00:05:02.856 "bdev_name": "Malloc1", 00:05:02.856 "nbd_device": "/dev/nbd1" 00:05:02.856 } 00:05:02.856 ]' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.856 /dev/nbd1' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.856 /dev/nbd1' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.856 256+0 records in 00:05:02.856 256+0 records out 00:05:02.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539584 s, 194 MB/s 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.856 17:03:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.114 256+0 records in 00:05:03.114 256+0 records out 00:05:03.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266466 s, 39.4 MB/s 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.114 256+0 records in 00:05:03.114 256+0 records out 00:05:03.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295655 s, 35.5 MB/s 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.114 17:03:55 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.114 17:03:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.372 17:03:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.630 17:03:56 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.630 17:03:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.888 17:03:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.888 17:03:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.459 17:03:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.718 [2024-11-19 17:03:57.040185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.718 [2024-11-19 17:03:57.126442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.718 [2024-11-19 17:03:57.126462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.718 [2024-11-19 17:03:57.211988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.718 [2024-11-19 17:03:57.212074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.249 17:03:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.249 spdk_app_start Round 1 00:05:07.249 17:03:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.249 17:03:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60144 /var/tmp/spdk-nbd.sock 00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60144 ']' 00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
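The nbd_get_count helper traced just above is a small JSON pipeline: list the exported nbd devices over RPC, pull out their device nodes with jq, and count them (2 while both disks are attached, 0 after nbd_stop_disks). Sketch of exactly the pipeline the trace shows, with $rootdir standing in for the repo path in the log:

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # grep -c prints 0 on no match
    echo "$count"
}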
00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.249 17:03:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.816 17:04:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.816 17:04:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.816 17:04:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.075 Malloc0 00:05:08.075 17:04:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.334 Malloc1 00:05:08.334 17:04:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.334 17:04:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.593 /dev/nbd0 00:05:08.593 17:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.593 17:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.593 1+0 records in 00:05:08.593 1+0 records out 
00:05:08.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400443 s, 10.2 MB/s 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.593 17:04:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.593 17:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.593 17:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.593 17:04:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.852 /dev/nbd1 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.852 1+0 records in 00:05:08.852 1+0 records out 00:05:08.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447713 s, 9.1 MB/s 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.852 17:04:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.852 17:04:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.419 { 00:05:09.419 "bdev_name": "Malloc0", 00:05:09.419 "nbd_device": "/dev/nbd0" 00:05:09.419 }, 00:05:09.419 { 00:05:09.419 "bdev_name": "Malloc1", 00:05:09.419 "nbd_device": "/dev/nbd1" 00:05:09.419 } 
00:05:09.419 ]' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.419 { 00:05:09.419 "bdev_name": "Malloc0", 00:05:09.419 "nbd_device": "/dev/nbd0" 00:05:09.419 }, 00:05:09.419 { 00:05:09.419 "bdev_name": "Malloc1", 00:05:09.419 "nbd_device": "/dev/nbd1" 00:05:09.419 } 00:05:09.419 ]' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.419 /dev/nbd1' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.419 /dev/nbd1' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.419 256+0 records in 00:05:09.419 256+0 records out 00:05:09.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782029 s, 134 MB/s 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.419 256+0 records in 00:05:09.419 256+0 records out 00:05:09.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265001 s, 39.6 MB/s 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.419 17:04:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.419 256+0 records in 00:05:09.419 256+0 records out 00:05:09.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292547 s, 35.8 MB/s 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.420 17:04:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.420 17:04:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.679 17:04:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.246 17:04:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.505 17:04:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.505 17:04:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.764 17:04:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.023 [2024-11-19 17:04:03.518593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.281 [2024-11-19 17:04:03.598065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.281 [2024-11-19 17:04:03.598085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.282 [2024-11-19 17:04:03.673422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.282 [2024-11-19 17:04:03.673501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.815 17:04:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.815 spdk_app_start Round 2 00:05:13.815 17:04:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.815 17:04:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60144 /var/tmp/spdk-nbd.sock 00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60144 ']' 00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
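The Round 1 data path traced above is the write/verify pair from nbd_common.sh: a write pass fills a 1 MiB scratch file from /dev/urandom and dd's it onto each NBD device with oflag=direct, and a verify pass byte-compares the first 1 MiB of each device against that file with cmp before deleting it. A sketch assembled from the traced commands (the scratch path below is illustrative; the log uses test/event/nbdrandtest):

    nbd_dd_data_verify() {
        local nbd_list=('/dev/nbd0' '/dev/nbd1')
        local operation=$1
        local tmp_file=/tmp/nbdrandtest   # illustrative; see the log for the real path
        local i
        if [[ $operation == write ]]; then
            # 256 x 4 KiB = 1 MiB of random data, replayed onto every device.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # non-zero exit on any mismatch
            done
            rm "$tmp_file"
        fi
    }

cmp is the actual pass/fail gate: a corrupt block makes it exit non-zero, which fails the test under set -e.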
00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.815 17:04:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.074 17:04:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.074 17:04:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.074 17:04:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.642 Malloc0 00:05:14.642 17:04:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.901 Malloc1 00:05:14.901 17:04:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.901 17:04:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.160 /dev/nbd0 00:05:15.160 17:04:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.160 17:04:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.160 1+0 records in 00:05:15.160 1+0 records out 
00:05:15.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030033 s, 13.6 MB/s 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.160 17:04:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.160 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.160 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.160 17:04:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.419 /dev/nbd1 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.419 1+0 records in 00:05:15.419 1+0 records out 00:05:15.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042055 s, 9.7 MB/s 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.419 17:04:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.419 17:04:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.987 { 00:05:15.987 "bdev_name": "Malloc0", 00:05:15.987 "nbd_device": "/dev/nbd0" 00:05:15.987 }, 00:05:15.987 { 00:05:15.987 "bdev_name": "Malloc1", 00:05:15.987 "nbd_device": "/dev/nbd1" 00:05:15.987 } 00:05:15.987 
]' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.987 { 00:05:15.987 "bdev_name": "Malloc0", 00:05:15.987 "nbd_device": "/dev/nbd0" 00:05:15.987 }, 00:05:15.987 { 00:05:15.987 "bdev_name": "Malloc1", 00:05:15.987 "nbd_device": "/dev/nbd1" 00:05:15.987 } 00:05:15.987 ]' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.987 /dev/nbd1' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.987 /dev/nbd1' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.987 256+0 records in 00:05:15.987 256+0 records out 00:05:15.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102594 s, 102 MB/s 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.987 256+0 records in 00:05:15.987 256+0 records out 00:05:15.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260081 s, 40.3 MB/s 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.987 256+0 records in 00:05:15.987 256+0 records out 00:05:15.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313918 s, 33.4 MB/s 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.987 17:04:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.247 17:04:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.506 17:04:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.506 17:04:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.507 17:04:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.765 17:04:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.765 17:04:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.765 17:04:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.059 17:04:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.059 17:04:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.340 17:04:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.340 [2024-11-19 17:04:09.855286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.597 [2024-11-19 17:04:09.934699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.597 [2024-11-19 17:04:09.934730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.597 [2024-11-19 17:04:10.012082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.597 [2024-11-19 17:04:10.012162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.128 17:04:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60144 /var/tmp/spdk-nbd.sock 00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60144 ']' 00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
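The spdk_kill_instance SIGTERM plus sleep 3 pair traced above is the boundary between app_repeat rounds: the for i in {0..2} loop traced earlier drives one create/verify/teardown cycle per round, and each teardown asks the app to exit before the next round restarts it. A sketch of that cadence, with the per-round body elided to a comment (the loop shape comes from the trace; the variable handling here is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path from the log
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... per round: bdev_malloc_create x2, nbd_rpc_data_verify, as traced ...
        $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let the reactors drain before the next round's start-up
    done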
00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.128 17:04:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.387 17:04:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.387 17:04:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.387 17:04:12 event.app_repeat -- event/event.sh@39 -- # killprocess 60144 00:05:20.387 17:04:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60144 ']' 00:05:20.387 17:04:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60144 00:05:20.387 17:04:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60144 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.646 killing process with pid 60144 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60144' 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60144 00:05:20.646 17:04:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60144 00:05:20.904 spdk_app_start is called in Round 0. 00:05:20.904 Shutdown signal received, stop current app iteration 00:05:20.904 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.904 spdk_app_start is called in Round 1. 00:05:20.904 Shutdown signal received, stop current app iteration 00:05:20.904 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.904 spdk_app_start is called in Round 2. 00:05:20.904 Shutdown signal received, stop current app iteration 00:05:20.904 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:05:20.904 spdk_app_start is called in Round 3. 00:05:20.904 Shutdown signal received, stop current app iteration 00:05:20.904 17:04:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.904 17:04:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.904 00:05:20.904 real 0m20.007s 00:05:20.904 user 0m45.498s 00:05:20.904 sys 0m3.274s 00:05:20.904 17:04:13 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.904 ************************************ 00:05:20.904 END TEST app_repeat 00:05:20.904 17:04:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 ************************************ 00:05:20.904 17:04:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.904 17:04:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.904 17:04:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.904 17:04:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.904 17:04:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 ************************************ 00:05:20.904 START TEST cpu_locks 00:05:20.904 ************************************ 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.904 * Looking for test storage... 
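The killprocess 60144 sequence in the trace above is the guarded kill from autotest_common.sh: check the pid argument, confirm the process is still alive with kill -0, resolve its command name with ps, then SIGTERM it and reap it with wait. A simplified sketch of that flow (the traced helper also branches when the resolved name is sudo; that path is reduced to a comment here):

    killprocess() {
        local pid=$1
        [[ -n $pid ]]        # same guard as the traced '[' -z ... ']'
        kill -0 "$pid"       # fail fast if the process is already gone
        local process_name=""
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The traced original special-cases process_name = sudo; omitted here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true  # reap; ignore SIGTERM's non-zero exit status
    }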
00:05:20.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.904 17:04:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.904 --rc genhtml_branch_coverage=1 00:05:20.904 --rc genhtml_function_coverage=1 00:05:20.904 --rc genhtml_legend=1 00:05:20.904 --rc geninfo_all_blocks=1 00:05:20.904 --rc geninfo_unexecuted_blocks=1 00:05:20.904 00:05:20.904 ' 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.904 --rc genhtml_branch_coverage=1 00:05:20.904 --rc genhtml_function_coverage=1 
00:05:20.904 --rc genhtml_legend=1 00:05:20.904 --rc geninfo_all_blocks=1 00:05:20.904 --rc geninfo_unexecuted_blocks=1 00:05:20.904 00:05:20.904 ' 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.904 --rc genhtml_branch_coverage=1 00:05:20.904 --rc genhtml_function_coverage=1 00:05:20.904 --rc genhtml_legend=1 00:05:20.904 --rc geninfo_all_blocks=1 00:05:20.904 --rc geninfo_unexecuted_blocks=1 00:05:20.904 00:05:20.904 ' 00:05:20.904 17:04:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.905 --rc genhtml_branch_coverage=1 00:05:20.905 --rc genhtml_function_coverage=1 00:05:20.905 --rc genhtml_legend=1 00:05:20.905 --rc geninfo_all_blocks=1 00:05:20.905 --rc geninfo_unexecuted_blocks=1 00:05:20.905 00:05:20.905 ' 00:05:20.905 17:04:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:20.905 17:04:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:20.905 17:04:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:20.905 17:04:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:20.905 17:04:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.905 17:04:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.905 17:04:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.163 ************************************ 00:05:21.163 START TEST default_locks 00:05:21.163 ************************************ 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60785 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60785 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.163 17:04:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.163 [2024-11-19 17:04:13.506017] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
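spdk_tgt -m 0x1, launched above, claims CPU core 0 and takes a file lock for it; the default_locks test checks for that lock with lslocks, as the traced lines just below show. The check is a two-command idiom:

    locks_exist() {
        local pid=$1
        # The target holds one POSIX file lock per claimed core; lslocks
        # lists the locks owned by the pid, and the lock path contains
        # spdk_cpu_lock, so a quiet grep is the whole assertion.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

Usage mirrors the trace: locks_exist 60785 succeeds while the target holds its core lock.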
00:05:21.163 [2024-11-19 17:04:13.506149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60785 ] 00:05:21.163 [2024-11-19 17:04:13.651511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.421 [2024-11-19 17:04:13.721768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.679 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.679 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:21.679 17:04:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60785 00:05:21.679 17:04:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60785 00:05:21.679 17:04:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60785 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60785 ']' 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60785 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60785 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.244 killing process with pid 60785 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60785' 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60785 00:05:22.244 17:04:14 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60785 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60785 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60785 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60785 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.810 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60785) - No such process 00:05:22.810 ERROR: process (pid: 60785) is no longer running 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.810 00:05:22.810 real 0m1.703s 00:05:22.810 user 0m1.605s 00:05:22.810 sys 0m0.657s 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.810 17:04:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 ************************************ 00:05:22.810 END TEST default_locks 00:05:22.810 ************************************ 00:05:22.810 17:04:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.810 17:04:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.810 17:04:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.810 17:04:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 ************************************ 00:05:22.810 START TEST default_locks_via_rpc 00:05:22.810 ************************************ 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60841 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60841 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60841 ']' 00:05:22.810 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.811 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
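The 'kill: (60785) - No such process' and 'ERROR: process (pid: 60785) is no longer running' lines above are the expected outcome, not a failure: after killing the target, default_locks wraps waitforlisten in NOT and passes precisely because waitforlisten fails. A simplified sketch of that inversion helper (the traced original also validates that the argument is runnable and gives statuses above 128, death by signal, special handling; both are reduced to comments):

    NOT() {
        # Run "$@" and invert its status: succeed only when it fails.
        local es=0
        "$@" || es=$?
        # Traced original: rejects non-command arguments up front and
        # post-processes signal-death statuses (es > 128).
        (( es != 0 ))
    }

    # e.g. NOT waitforlisten "$pid"   # passes once the pid is gone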
00:05:22.811 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.811 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.811 17:04:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.811 [2024-11-19 17:04:15.264662] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:22.811 [2024-11-19 17:04:15.264794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60841 ] 00:05:23.069 [2024-11-19 17:04:15.414057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.069 [2024-11-19 17:04:15.494791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60841 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.033 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60841 ']' 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.291 killing process with pid 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60841' 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60841 00:05:24.291 17:04:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60841 00:05:24.858 00:05:24.858 real 0m2.109s 00:05:24.858 user 0m2.228s 00:05:24.858 sys 0m0.661s 00:05:24.858 17:04:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.858 ************************************ 00:05:24.858 END TEST default_locks_via_rpc 00:05:24.858 ************************************ 00:05:24.858 17:04:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.858 17:04:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:24.858 17:04:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.858 17:04:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.858 17:04:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.858 ************************************ 00:05:24.858 START TEST non_locking_app_on_locked_coremask 00:05:24.858 ************************************ 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60910 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60910 /var/tmp/spdk.sock 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60910 ']' 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.858 17:04:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.117 [2024-11-19 17:04:17.419126] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
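The default_locks_via_rpc run that completed above toggles the core locks at runtime instead of at start-up: framework_disable_cpumask_locks releases them, the no_locks helper confirms no spdk_cpu_lock remains, and framework_enable_cpumask_locks re-acquires them before the lslocks check. A sketch of that round-trip against the default RPC socket (the RPC names and rpc.py path are from the trace; spdk_tgt_pid is assumed to hold the target's pid, and the absence check below substitutes the lslocks idiom for the traced no_locks file-glob helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc framework_disable_cpumask_locks                     # drop the core locks
    ! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # expect none held
    $rpc framework_enable_cpumask_locks                      # take them back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock       # expect held again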
00:05:25.117 [2024-11-19 17:04:17.419225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:05:25.117 [2024-11-19 17:04:17.565107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.117 [2024-11-19 17:04:17.645815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60938 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60938 /var/tmp/spdk2.sock 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60938 ']' 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.052 17:04:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 [2024-11-19 17:04:18.536126] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:26.052 [2024-11-19 17:04:18.536229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:05:26.311 [2024-11-19 17:04:18.696617] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
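The two notices above capture the whole point of non_locking_app_on_locked_coremask: the first spdk_tgt claimed core 0 and took its CPU lock, yet a second target started on the same -m 0x1 mask still comes up, because it was launched with --disable-cpumask-locks (hence "CPU core locks deactivated"). A minimal standalone sketch of that setup, assuming the helpers behave as traced in this run; the pid variable names are illustrative:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                 # claims core 0, locks /var/tmp/spdk_cpu_lock_000
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # Same core mask, but opted out of locking; a second RPC socket avoids a clash.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    locks_exist "$pid1"                  # still true: only pid1 holds the core 0 lock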
00:05:26.311 [2024-11-19 17:04:18.696687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.569 [2024-11-19 17:04:18.861229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.135 17:04:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.135 17:04:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.135 17:04:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60910 00:05:27.135 17:04:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60910 00:05:27.135 17:04:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60910 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60910 ']' 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60910 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60910 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.070 killing process with pid 60910 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60910' 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60910 00:05:28.070 17:04:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60910 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60938 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60938 ']' 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60938 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60938 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.005 killing process with pid 60938 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.005 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60938' 00:05:29.005 17:04:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60938 00:05:29.264 17:04:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60938 00:05:29.832 00:05:29.832 real 0m4.710s 00:05:29.832 user 0m5.171s 00:05:29.832 sys 0m1.274s 00:05:29.832 17:04:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.832 ************************************ 00:05:29.832 END TEST non_locking_app_on_locked_coremask 00:05:29.832 ************************************ 00:05:29.832 17:04:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.832 17:04:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.832 17:04:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.832 17:04:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.832 17:04:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.832 ************************************ 00:05:29.832 START TEST locking_app_on_unlocked_coremask 00:05:29.832 ************************************ 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61024 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61024 /var/tmp/spdk.sock 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61024 ']' 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.832 17:04:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.832 [2024-11-19 17:04:22.205199] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:29.832 [2024-11-19 17:04:22.205344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:05:29.832 [2024-11-19 17:04:22.355352] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
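The lslocks | grep pair traced a few lines up (event/cpu_locks.sh@22, run against pid 60910) is the probe every one of these cases leans on. Reconstructed from the xtrace, locks_exist is roughly the following; the actual helper may differ in detail:

    locks_exist() {
        local pid=$1
        # spdk_tgt takes a lock on /var/tmp/spdk_cpu_lock_NNN for each core it claims;
        # lslocks lists the locks held by that pid, grep -q only reports presence.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }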
00:05:29.832 [2024-11-19 17:04:22.355417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.091 [2024-11-19 17:04:22.437263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61052 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61052 /var/tmp/spdk2.sock 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61052 ']' 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.027 17:04:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.027 [2024-11-19 17:04:23.330762] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:31.027 [2024-11-19 17:04:23.330868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61052 ] 00:05:31.027 [2024-11-19 17:04:23.492902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.286 [2024-11-19 17:04:23.658941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.222 17:04:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.222 17:04:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.222 17:04:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61052 00:05:32.222 17:04:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61052 00:05:32.222 17:04:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61024 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61024 ']' 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61024 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61024 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.788 killing process with pid 61024 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61024' 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61024 00:05:32.788 17:04:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61024 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61052 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61052 ']' 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61052 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61052 00:05:34.166 killing process with pid 61052 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.166 17:04:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61052' 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61052 00:05:34.166 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61052 00:05:34.425 ************************************ 00:05:34.425 END TEST locking_app_on_unlocked_coremask 00:05:34.425 ************************************ 00:05:34.425 00:05:34.425 real 0m4.793s 00:05:34.425 user 0m5.214s 00:05:34.425 sys 0m1.384s 00:05:34.425 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.425 17:04:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.685 17:04:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:34.685 17:04:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.685 17:04:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.685 17:04:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.685 ************************************ 00:05:34.685 START TEST locking_app_on_locked_coremask 00:05:34.685 ************************************ 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:34.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61136 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61136 /var/tmp/spdk.sock 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61136 ']' 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.685 17:04:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.685 [2024-11-19 17:04:27.043231] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:34.685 [2024-11-19 17:04:27.043547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61136 ] 00:05:34.685 [2024-11-19 17:04:27.186035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.944 [2024-11-19 17:04:27.260600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61164 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61164 /var/tmp/spdk2.sock 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61164 /var/tmp/spdk2.sock 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.511 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61164 /var/tmp/spdk2.sock 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61164 ']' 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.769 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.769 [2024-11-19 17:04:28.108874] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:35.770 [2024-11-19 17:04:28.109266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61164 ] 00:05:35.770 [2024-11-19 17:04:28.269728] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61136 has claimed it. 00:05:35.770 [2024-11-19 17:04:28.269838] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.335 ERROR: process (pid: 61164) is no longer running 00:05:36.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61164) - No such process 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61136 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61136 00:05:36.335 17:04:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61136 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61136 ']' 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61136 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61136 00:05:36.901 killing process with pid 61136 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61136' 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61136 00:05:36.901 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61136 00:05:37.467 00:05:37.467 real 0m2.779s 00:05:37.467 user 0m3.164s 00:05:37.467 sys 0m0.714s 00:05:37.467 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.467 ************************************ 00:05:37.467 END 
TEST locking_app_on_locked_coremask 00:05:37.467 ************************************ 00:05:37.467 17:04:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.467 17:04:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:37.467 17:04:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.467 17:04:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.467 17:04:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.467 ************************************ 00:05:37.467 START TEST locking_overlapped_coremask 00:05:37.467 ************************************ 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:37.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61221 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61221 /var/tmp/spdk.sock 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61221 ']' 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.467 17:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.467 [2024-11-19 17:04:29.868765] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:37.467 [2024-11-19 17:04:29.869177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:05:37.725 [2024-11-19 17:04:30.016712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.725 [2024-11-19 17:04:30.099312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.725 [2024-11-19 17:04:30.100143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.725 [2024-11-19 17:04:30.100201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61251 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61251 /var/tmp/spdk2.sock 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61251 /var/tmp/spdk2.sock 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61251 /var/tmp/spdk2.sock 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61251 ']' 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.660 17:04:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.660 [2024-11-19 17:04:30.943549] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:38.660 [2024-11-19 17:04:30.943859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:05:38.660 [2024-11-19 17:04:31.115352] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61221 has claimed it. 00:05:38.660 [2024-11-19 17:04:31.115451] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.227 ERROR: process (pid: 61251) is no longer running 00:05:39.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61251) - No such process 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61221 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61221 ']' 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61221 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61221 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61221' 00:05:39.227 killing process with pid 61221 00:05:39.227 17:04:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61221 00:05:39.227 17:04:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61221 00:05:40.164 00:05:40.164 real 0m2.553s 00:05:40.164 user 0m7.153s 00:05:40.164 sys 0m0.560s 00:05:40.164 ************************************ 00:05:40.164 END TEST locking_overlapped_coremask 00:05:40.164 ************************************ 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.164 17:04:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:40.164 17:04:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.164 17:04:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.164 17:04:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.164 ************************************ 00:05:40.164 START TEST locking_overlapped_coremask_via_rpc 00:05:40.164 ************************************ 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:40.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61303 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61303 /var/tmp/spdk.sock 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61303 ']' 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.164 17:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.164 [2024-11-19 17:04:32.480490] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:40.164 [2024-11-19 17:04:32.480610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61303 ] 00:05:40.164 [2024-11-19 17:04:32.632923] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
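locking_overlapped_coremask, which just wrapped up, verifies more than exit codes: the check_remaining_locks call traced at event/cpu_locks.sh@36-38 (and again at @161 further below) compares the lock files actually on disk against the exact set a -m 0x7 mask should leave behind. A sketch matching the traced lines:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # @36: what exists now
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # @37: cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]          # @38: must match exactly
    }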
00:05:40.164 [2024-11-19 17:04:32.633005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.422 [2024-11-19 17:04:32.726503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.422 [2024-11-19 17:04:32.726653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.422 [2024-11-19 17:04:32.726662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61333 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61333 /var/tmp/spdk2.sock 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61333 ']' 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.989 17:04:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.248 [2024-11-19 17:04:33.571800] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:41.248 [2024-11-19 17:04:33.572200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61333 ] 00:05:41.248 [2024-11-19 17:04:33.741165] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
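At this point locking_overlapped_coremask_via_rpc has two live targets with deliberately overlapping masks, both started with --disable-cpumask-locks so neither locks anything yet: pid 61303 on -m 0x7 (cores 0-2) and pid 61333 on -m 0x1c (cores 2-4). The contested core is just the bitwise AND of the two masks:

    $ printf '0x%x\n' $((0x7 & 0x1c))
    0x4        # only core 2 is shared, which is exactly where the claim will fail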
00:05:41.248 [2024-11-19 17:04:33.741249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.507 [2024-11-19 17:04:33.921711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.507 [2024-11-19 17:04:33.921826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:41.507 [2024-11-19 17:04:33.921832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.118 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.118 [2024-11-19 17:04:34.646899] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61303 has claimed it. 
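That claim_cpu_cores ERROR is the expected outcome: the test first enables locks on pid 61303 (claiming cores 0-2), then asks pid 61333 to do the same over its own socket, which trips on the shared core 2. rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py, so the equivalent manual calls would look roughly like this, using the socket paths from this run:

    # First target: succeeds and creates spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Second target: rejected with the JSON-RPC error shown just below
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks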
00:05:42.376 2024/11/19 17:04:34 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:42.376 request: 00:05:42.376 { 00:05:42.376 "method": "framework_enable_cpumask_locks", 00:05:42.376 "params": {} 00:05:42.376 } 00:05:42.376 Got JSON-RPC error response 00:05:42.376 GoRPCClient: error on JSON-RPC call 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61303 /var/tmp/spdk.sock 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61303 ']' 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.376 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61333 /var/tmp/spdk2.sock 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61333 ']' 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
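The es=1 bookkeeping in that trace belongs to autotest's NOT helper: the RPC call is wrapped as NOT rpc_cmd ... because the test only passes when the call fails. Pieced together from the traced lines (@652, @655, @663, @679), with the signal-exit handling elided, it is approximately:

    NOT() {
        local es=0            # @652
        "$@" || es=$?         # @655: run the wrapped command, capture its status
        # @663/@674: statuses above 128 (signal deaths) get extra handling, elided here
        ((!es == 0))          # @679: exit 0 only if the command actually failed
    }

The same pattern appeared earlier with NOT waitforlisten 61164, where the second fully locked target was expected to die before ever listening.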
00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.635 17:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.893 00:05:42.893 real 0m2.829s 00:05:42.893 user 0m1.511s 00:05:42.893 sys 0m0.245s 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.893 17:04:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 ************************************ 00:05:42.893 END TEST locking_overlapped_coremask_via_rpc 00:05:42.893 ************************************ 00:05:42.893 17:04:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:42.893 17:04:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61303 ]] 00:05:42.893 17:04:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61303 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61303 ']' 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61303 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61303 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.893 killing process with pid 61303 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61303' 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61303 00:05:42.893 17:04:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61303 00:05:43.459 17:04:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61333 ]] 00:05:43.459 17:04:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61333 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61333 ']' 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61333 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.459 
17:04:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61333 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:43.459 killing process with pid 61333 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61333' 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61333 00:05:43.459 17:04:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61333 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61303 ]] 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61303 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61303 ']' 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61303 00:05:44.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61303) - No such process 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61303 is not found' 00:05:44.026 Process with pid 61303 is not found 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61333 ]] 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61333 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61333 ']' 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61333 00:05:44.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61333) - No such process 00:05:44.026 Process with pid 61333 is not found 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61333 is not found' 00:05:44.026 17:04:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.026 00:05:44.026 real 0m23.261s 00:05:44.026 user 0m40.603s 00:05:44.026 sys 0m6.637s 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.026 17:04:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.026 ************************************ 00:05:44.026 END TEST cpu_locks 00:05:44.026 ************************************ 00:05:44.026 00:05:44.026 real 0m51.747s 00:05:44.026 user 1m38.615s 00:05:44.026 sys 0m10.720s 00:05:44.026 17:04:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.026 17:04:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.026 ************************************ 00:05:44.026 END TEST event 00:05:44.026 ************************************ 00:05:44.285 17:04:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.285 17:04:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.285 17:04:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.285 17:04:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.285 ************************************ 00:05:44.285 START TEST thread 00:05:44.285 ************************************ 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.285 * Looking for test storage... 
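The rm -f and the killprocess pair traced just above are cpu_locks' teardown. Reconstructed from the @15-@18 and @954-@981 trace lines, the two helpers fit roughly this shape; the sudo branch is inferred from the reactor_0-versus-sudo comparison at @964, and the pgrep call is an assumption:

    cleanup() {
        [[ -z $spdk_tgt_pid ]] || killprocess "$spdk_tgt_pid"      # @15
        [[ -z $spdk_tgt_pid2 ]] || killprocess "$spdk_tgt_pid2"    # @16
        rm -f /var/tmp/spdk_cpu_lock*                              # @18
    }

    killprocess() {
        local pid=$1 target=$1
        [[ -n $pid ]] || return 1                                  # @954
        if ! kill -0 "$pid"; then                                  # @958
            echo "Process with pid $pid is not found"              # @981
            return 0
        fi
        [[ $(uname) == Linux ]] \
            && process_name=$(ps --no-headers -o comm= "$pid")     # @959/@960
        # If the direct child is a sudo wrapper, signal the real app underneath.
        [[ $process_name == sudo ]] && target=$(pgrep -P "$pid")   # @964 (assumed)
        echo "killing process with pid $target"                    # @972
        kill "$target"                                             # @973
        wait "$pid"                                                # @978: reap our own child
    }

Note how pid 61333 resolves to process_name=reactor_2 rather than reactor_0, presumably because the lowest core in its 0x1c mask is core 2.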
00:05:44.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.285 17:04:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.285 17:04:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.285 17:04:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.285 17:04:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.285 17:04:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.285 17:04:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.285 17:04:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.285 17:04:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.285 17:04:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.285 17:04:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.285 17:04:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.285 17:04:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:44.285 17:04:36 thread -- scripts/common.sh@345 -- # : 1 00:05:44.285 17:04:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.285 17:04:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.285 17:04:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:44.285 17:04:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:44.285 17:04:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.285 17:04:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:44.285 17:04:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.285 17:04:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:44.285 17:04:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:44.285 17:04:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.285 17:04:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:44.285 17:04:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.285 17:04:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.285 17:04:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.285 17:04:36 thread -- scripts/common.sh@368 -- # return 0 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.285 --rc genhtml_branch_coverage=1 00:05:44.285 --rc genhtml_function_coverage=1 00:05:44.285 --rc genhtml_legend=1 00:05:44.285 --rc geninfo_all_blocks=1 00:05:44.285 --rc geninfo_unexecuted_blocks=1 00:05:44.285 00:05:44.285 ' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.285 --rc genhtml_branch_coverage=1 00:05:44.285 --rc genhtml_function_coverage=1 00:05:44.285 --rc genhtml_legend=1 00:05:44.285 --rc geninfo_all_blocks=1 00:05:44.285 --rc geninfo_unexecuted_blocks=1 00:05:44.285 00:05:44.285 ' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:44.285 --rc genhtml_branch_coverage=1 00:05:44.285 --rc genhtml_function_coverage=1 00:05:44.285 --rc genhtml_legend=1 00:05:44.285 --rc geninfo_all_blocks=1 00:05:44.285 --rc geninfo_unexecuted_blocks=1 00:05:44.285 00:05:44.285 ' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.285 --rc genhtml_branch_coverage=1 00:05:44.285 --rc genhtml_function_coverage=1 00:05:44.285 --rc genhtml_legend=1 00:05:44.285 --rc geninfo_all_blocks=1 00:05:44.285 --rc geninfo_unexecuted_blocks=1 00:05:44.285 00:05:44.285 ' 00:05:44.285 17:04:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.285 17:04:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 ************************************ 00:05:44.544 START TEST thread_poller_perf 00:05:44.544 ************************************ 00:05:44.544 17:04:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.544 [2024-11-19 17:04:36.844716] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:44.544 [2024-11-19 17:04:36.844866] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61493 ] 00:05:44.544 [2024-11-19 17:04:36.997712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.802 Running 1000 pollers for 1 seconds with 1 microseconds period. 
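Reading the poller_perf command line against its banner ("Running 1000 pollers for 1 seconds with 1 microseconds period"), the flags appear to decode as -b (number of pollers to register), -l (poller period in microseconds, where 0 means fire on every reactor iteration), and -t (run time in seconds); this is inferred from the log rather than the tool's help text. thread.sh drives the two variants seen in this run:

    poller_perf -b 1000 -l 1 -t 1    # 1000 timed pollers on a 1 us period
    poller_perf -b 1000 -l 0 -t 1    # 1000 pollers run on every reactor iteration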
00:05:44.802 [2024-11-19 17:04:37.088669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.737 [2024-11-19T17:04:38.270Z] ====================================== 00:05:45.737 [2024-11-19T17:04:38.270Z] busy:2209893529 (cyc) 00:05:45.737 [2024-11-19T17:04:38.270Z] total_run_count: 306000 00:05:45.737 [2024-11-19T17:04:38.270Z] tsc_hz: 2200000000 (cyc) 00:05:45.737 [2024-11-19T17:04:38.270Z] ====================================== 00:05:45.737 [2024-11-19T17:04:38.270Z] poller_cost: 7221 (cyc), 3282 (nsec) 00:05:45.737 00:05:45.737 real 0m1.346s 00:05:45.737 user 0m1.179s 00:05:45.737 sys 0m0.060s 00:05:45.737 17:04:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.737 17:04:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.737 ************************************ 00:05:45.738 END TEST thread_poller_perf 00:05:45.738 ************************************ 00:05:45.738 17:04:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.738 17:04:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:45.738 17:04:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.738 17:04:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.738 ************************************ 00:05:45.738 START TEST thread_poller_perf 00:05:45.738 ************************************ 00:05:45.738 17:04:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.738 [2024-11-19 17:04:38.248516] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:45.738 [2024-11-19 17:04:38.248641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61528 ] 00:05:45.997 [2024-11-19 17:04:38.396513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.997 Running 1000 pollers for 1 seconds with 0 microseconds period. 
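The summary block above reduces to two integer divisions: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure rescales that by the TSC frequency. Reproducing the 1-microsecond-period run (variable names here are just labels for the numbers printed above):

  busy=2209893529 total_run_count=306000 tsc_hz=2200000000
  cost_cyc=$(( busy / total_run_count ))            # 2209893529 / 306000 -> 7221
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 7221 cyc at 2.2 GHz -> 3282
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic on the 0-microsecond run below gives 2202592305 / 4180000 = 526 cycles, i.e. 239 nsec: a poller rescheduled on a 1 us timer costs roughly 14x more per iteration than one re-queued immediately.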
00:05:45.997 [2024-11-19 17:04:38.479376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.371 [2024-11-19T17:04:39.904Z] ====================================== 00:05:47.371 [2024-11-19T17:04:39.904Z] busy:2202592305 (cyc) 00:05:47.371 [2024-11-19T17:04:39.904Z] total_run_count: 4180000 00:05:47.371 [2024-11-19T17:04:39.904Z] tsc_hz: 2200000000 (cyc) 00:05:47.371 [2024-11-19T17:04:39.904Z] ====================================== 00:05:47.371 [2024-11-19T17:04:39.904Z] poller_cost: 526 (cyc), 239 (nsec) 00:05:47.371 00:05:47.371 real 0m1.326s 00:05:47.371 user 0m1.167s 00:05:47.371 sys 0m0.052s 00:05:47.371 17:04:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.371 17:04:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 END TEST thread_poller_perf 00:05:47.371 ************************************ 00:05:47.371 17:04:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:47.371 00:05:47.371 real 0m3.003s 00:05:47.371 user 0m2.510s 00:05:47.371 sys 0m0.268s 00:05:47.371 17:04:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.371 17:04:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 END TEST thread 00:05:47.371 ************************************ 00:05:47.371 17:04:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:47.371 17:04:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.371 17:04:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.371 17:04:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.371 17:04:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 START TEST app_cmdline 00:05:47.371 ************************************ 00:05:47.371 17:04:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.371 * Looking for test storage... 
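Every START TEST / END TEST banner in this log, along with the real/user/sys triple between them, is emitted by the run_test wrapper from autotest_common.sh. Its rough shape, inferred from the banners alone rather than copied from the source:

  run_test() {
      local test_name=$1; shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"          # the timed command; prints the real/user/sys lines
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
  }
  run_test thread_poller_perf ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1

The real wrapper also funnels the test through xtrace control and argument checks (the '[' 8 -le 1 ']' lines above), which this sketch leaves out.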
00:05:47.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:47.371 17:04:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.371 17:04:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.371 17:04:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.371 17:04:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:47.371 17:04:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.372 17:04:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.372 --rc genhtml_branch_coverage=1 00:05:47.372 --rc genhtml_function_coverage=1 00:05:47.372 --rc genhtml_legend=1 00:05:47.372 --rc geninfo_all_blocks=1 00:05:47.372 --rc geninfo_unexecuted_blocks=1 00:05:47.372 00:05:47.372 ' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.372 --rc genhtml_branch_coverage=1 00:05:47.372 --rc genhtml_function_coverage=1 00:05:47.372 --rc genhtml_legend=1 00:05:47.372 --rc geninfo_all_blocks=1 00:05:47.372 --rc geninfo_unexecuted_blocks=1 00:05:47.372 
00:05:47.372 ' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.372 --rc genhtml_branch_coverage=1 00:05:47.372 --rc genhtml_function_coverage=1 00:05:47.372 --rc genhtml_legend=1 00:05:47.372 --rc geninfo_all_blocks=1 00:05:47.372 --rc geninfo_unexecuted_blocks=1 00:05:47.372 00:05:47.372 ' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.372 --rc genhtml_branch_coverage=1 00:05:47.372 --rc genhtml_function_coverage=1 00:05:47.372 --rc genhtml_legend=1 00:05:47.372 --rc geninfo_all_blocks=1 00:05:47.372 --rc geninfo_unexecuted_blocks=1 00:05:47.372 00:05:47.372 ' 00:05:47.372 17:04:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:47.372 17:04:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61611 00:05:47.372 17:04:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61611 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61611 ']' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.372 17:04:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.372 17:04:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.630 [2024-11-19 17:04:39.930768] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
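The cmdline test launches a bare spdk_tgt with --rpcs-allowed, so only the two listed methods are reachable over /var/tmp/spdk.sock, and waitforlisten blocks until that socket accepts connections. A condensed equivalent of the startup sequence traced above, with a simple until-loop standing in for waitforlisten (the real helper is more careful, also checking that the pid is still alive):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      --rpcs-allowed spdk_get_version,rpc_get_methods &
  spdk_tgt_pid=$!
  trap 'kill $spdk_tgt_pid' EXIT
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # waitforlisten, roughly

As the trace below confirms, the allowed spdk_get_version call returns the version object, while env_dpdk_get_mem_stats never reaches a handler and fails with JSON-RPC error -32601 (Method not found), which is exactly the outcome the NOT wrapper expects (es=1).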
00:05:47.630 [2024-11-19 17:04:39.930885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61611 ] 00:05:47.630 [2024-11-19 17:04:40.076502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.630 [2024-11-19 17:04:40.127707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.198 17:04:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.198 17:04:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:48.198 17:04:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:48.456 { 00:05:48.456 "fields": { 00:05:48.456 "commit": "f22e807f1", 00:05:48.456 "major": 25, 00:05:48.456 "minor": 1, 00:05:48.456 "patch": 0, 00:05:48.457 "suffix": "-pre" 00:05:48.457 }, 00:05:48.457 "version": "SPDK v25.01-pre git sha1 f22e807f1" 00:05:48.457 } 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.457 17:04:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:48.457 17:04:40 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.715 2024/11/19 17:04:41 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:05:48.715 request: 00:05:48.715 { 00:05:48.715 "method": "env_dpdk_get_mem_stats", 00:05:48.715 "params": {} 00:05:48.715 } 00:05:48.715 Got JSON-RPC error response 00:05:48.715 GoRPCClient: error on JSON-RPC call 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.715 17:04:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61611 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61611 ']' 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61611 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61611 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.715 killing process with pid 61611 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61611' 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 61611 00:05:48.715 17:04:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 61611 00:05:49.281 00:05:49.281 real 0m2.089s 00:05:49.281 user 0m2.418s 00:05:49.281 sys 0m0.601s 00:05:49.281 17:04:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.281 ************************************ 00:05:49.281 END TEST app_cmdline 00:05:49.281 17:04:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.281 ************************************ 00:05:49.281 17:04:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.281 17:04:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.281 17:04:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.281 17:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.281 ************************************ 00:05:49.282 START TEST version 00:05:49.282 ************************************ 00:05:49.282 17:04:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.540 * Looking for test storage... 
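The version test starting here cross-checks include/spdk/version.h against the installed Python package. In the trace below, get_header_version is a grep/cut/tr pipeline over the header; gathered into one place, with the patch/suffix handling reconstructed from the traced values rather than copied from app/version.sh:

  get_header_version() {   # e.g. get_header_version MAJOR -> 25
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
          | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"     # 25.1
  (( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"
  [[ "$(get_header_version SUFFIX)" == -pre ]] && version+=rc0          # 25.1rc0

The test passes only if python3 -c 'import spdk; print(spdk.__version__)' prints the same 25.1rc0 string, which it does in this run.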
00:05:49.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:49.540 17:04:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.540 17:04:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.540 17:04:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.540 17:04:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.540 17:04:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.540 17:04:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.540 17:04:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.540 17:04:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.540 17:04:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.540 17:04:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.540 17:04:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.540 17:04:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.540 17:04:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.540 17:04:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.540 17:04:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.540 17:04:41 version -- scripts/common.sh@344 -- # case "$op" in 00:05:49.540 17:04:41 version -- scripts/common.sh@345 -- # : 1 00:05:49.540 17:04:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.540 17:04:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.540 17:04:41 version -- scripts/common.sh@365 -- # decimal 1 00:05:49.540 17:04:41 version -- scripts/common.sh@353 -- # local d=1 00:05:49.541 17:04:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.541 17:04:41 version -- scripts/common.sh@355 -- # echo 1 00:05:49.541 17:04:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.541 17:04:41 version -- scripts/common.sh@366 -- # decimal 2 00:05:49.541 17:04:41 version -- scripts/common.sh@353 -- # local d=2 00:05:49.541 17:04:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.541 17:04:41 version -- scripts/common.sh@355 -- # echo 2 00:05:49.541 17:04:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.541 17:04:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.541 17:04:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.541 17:04:41 version -- scripts/common.sh@368 -- # return 0 00:05:49.541 17:04:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.541 17:04:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 17:04:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 17:04:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.541 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 17:04:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 17:04:41 version -- app/version.sh@17 -- # get_header_version major 00:05:49.541 17:04:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.541 17:04:41 version -- app/version.sh@14 -- # cut -f2 00:05:49.541 17:04:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.541 17:04:41 version -- app/version.sh@17 -- # major=25 00:05:49.541 17:04:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:49.541 17:04:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # cut -f2 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.541 17:04:42 version -- app/version.sh@18 -- # minor=1 00:05:49.541 17:04:42 version -- app/version.sh@19 -- # get_header_version patch 00:05:49.541 17:04:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # cut -f2 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.541 17:04:42 version -- app/version.sh@19 -- # patch=0 00:05:49.541 17:04:42 version -- app/version.sh@20 -- # get_header_version suffix 00:05:49.541 17:04:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # cut -f2 00:05:49.541 17:04:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.541 17:04:42 version -- app/version.sh@20 -- # suffix=-pre 00:05:49.541 17:04:42 version -- app/version.sh@22 -- # version=25.1 00:05:49.541 17:04:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:49.541 17:04:42 version -- app/version.sh@28 -- # version=25.1rc0 00:05:49.541 17:04:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:49.541 17:04:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:49.541 17:04:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:49.541 17:04:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:49.541 00:05:49.541 real 0m0.266s 00:05:49.541 user 0m0.169s 00:05:49.541 sys 0m0.132s 00:05:49.541 17:04:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.541 ************************************ 00:05:49.541 17:04:42 version -- common/autotest_common.sh@10 -- # set +x 00:05:49.541 END TEST version 00:05:49.541 ************************************ 00:05:49.799 17:04:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:49.799 17:04:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:49.799 17:04:42 -- spdk/autotest.sh@194 -- # uname -s 00:05:49.799 17:04:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:49.799 17:04:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.799 17:04:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.800 17:04:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:49.800 17:04:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.800 17:04:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 17:04:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:49.800 17:04:42 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:49.800 17:04:42 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:49.800 17:04:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:49.800 17:04:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.800 17:04:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 START TEST nvmf_tcp 00:05:49.800 ************************************ 00:05:49.800 17:04:42 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:49.800 * Looking for test storage... 00:05:49.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:05:49.800 17:04:42 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.800 17:04:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.800 17:04:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.058 17:04:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.058 17:04:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.059 17:04:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.059 17:04:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.059 17:04:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.059 17:04:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.059 17:04:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:50.059 17:04:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:50.059 17:04:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.059 17:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.059 ************************************ 00:05:50.059 START TEST nvmf_target_core 00:05:50.059 ************************************ 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.059 * Looking for test storage... 00:05:50.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.059 --rc genhtml_branch_coverage=1 00:05:50.059 --rc genhtml_function_coverage=1 00:05:50.059 --rc genhtml_legend=1 00:05:50.059 --rc geninfo_all_blocks=1 00:05:50.059 --rc geninfo_unexecuted_blocks=1 00:05:50.059 00:05:50.059 ' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.059 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:50.319 ************************************ 00:05:50.319 START TEST nvmf_abort 00:05:50.319 ************************************ 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:50.319 * Looking for test storage... 
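The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': build_nvmf_app_args tests a flag that is unset in this environment, the empty string cannot be parsed as an integer, and the check simply fails with status 2 while the script carries on. The failure mode and the usual guard, with a hypothetical flag name standing in for whichever variable common.sh actually reads:

  flag=""                    # hypothetical stand-in for the unset flag
  [ "$flag" -eq 1 ]          # -> [: : integer expression expected (status 2)
  [ "${flag:-0}" -eq 1 ]     # defaulting to 0 keeps the test well-formed

The same message appears once per suite that sources common.sh, which is why it shows up again further down in the abort test.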
00:05:50.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.319 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.320 --rc genhtml_branch_coverage=1 00:05:50.320 --rc genhtml_function_coverage=1 00:05:50.320 --rc genhtml_legend=1 00:05:50.320 --rc geninfo_all_blocks=1 00:05:50.320 --rc geninfo_unexecuted_blocks=1 00:05:50.320 00:05:50.320 ' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.320 --rc genhtml_branch_coverage=1 00:05:50.320 --rc genhtml_function_coverage=1 00:05:50.320 --rc genhtml_legend=1 00:05:50.320 --rc geninfo_all_blocks=1 00:05:50.320 --rc geninfo_unexecuted_blocks=1 00:05:50.320 00:05:50.320 ' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.320 --rc genhtml_branch_coverage=1 00:05:50.320 --rc genhtml_function_coverage=1 00:05:50.320 --rc genhtml_legend=1 00:05:50.320 --rc geninfo_all_blocks=1 00:05:50.320 --rc geninfo_unexecuted_blocks=1 00:05:50.320 00:05:50.320 ' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.320 --rc genhtml_branch_coverage=1 00:05:50.320 --rc genhtml_function_coverage=1 00:05:50.320 --rc genhtml_legend=1 00:05:50.320 --rc geninfo_all_blocks=1 00:05:50.320 --rc geninfo_unexecuted_blocks=1 00:05:50.320 00:05:50.320 ' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
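common.sh, sourced just above for the abort test, lays out the addressing plan (ports 4420-4422, initiator addresses 10.0.0.1 and 10.0.0.2, target addresses 10.0.0.3 and 10.0.0.4), and nvmftestinit then rebuilds the virtual network: the 'Cannot find device' lines further below are the teardown of a previous run finding nothing to delete, after which a fresh veth/bridge topology is created with the target side isolated in the nvmf_tgt_ns_spdk namespace. Condensed to the first of the two interface pairs the script creates, the setup traced below amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator half
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target half
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # (the per-link 'ip link set ... up' steps are omitted here)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                         # join both halves
  ip link set nvmf_tgt_br  master nvmf_br                         #   on one bridge

The second pair (nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) follows the same pattern, and the iptables ACCEPT rule at the end of the trace opens TCP port 4420 for the initiator-to-target traffic.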
00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:50.320 
17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:05:50.320 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:05:50.321 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:05:50.579 Cannot find device "nvmf_init_br" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:05:50.579 Cannot find device "nvmf_init_br2" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:05:50.579 Cannot find device "nvmf_tgt_br" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:05:50.579 Cannot find device "nvmf_tgt_br2" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:05:50.579 Cannot find device "nvmf_init_br" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:05:50.579 Cannot find device "nvmf_init_br2" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:05:50.579 Cannot find device "nvmf_tgt_br" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:05:50.579 Cannot find device "nvmf_tgt_br2" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:05:50.579 Cannot find device "nvmf_br" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:05:50.579 Cannot find device "nvmf_init_if" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:05:50.579 Cannot find device "nvmf_init_if2" 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:50.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:50.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:05:50.579 17:04:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
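Every "Cannot find device" and "Cannot open network namespace" line above is expected on a clean runner: nvmf_veth_init begins by tearing down whatever a previous run may have left behind, and the '# true' traced after each failing command shows the non-zero status being deliberately swallowed. The equivalent tolerant-cleanup idiom, as a sketch:

    # delete if present, ignore if absent
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true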
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:05:50.579 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:05:50.579 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:05:50.579 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:05:50.579 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:05:50.579 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:05:50.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:05:50.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:05:50.838 00:05:50.838 --- 10.0.0.3 ping statistics --- 00:05:50.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.838 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:05:50.838 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:05:50.838 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:05:50.838 00:05:50.838 --- 10.0.0.4 ping statistics --- 00:05:50.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.838 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:05:50.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:50.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:05:50.838 00:05:50.838 --- 10.0.0.1 ping statistics --- 00:05:50.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.838 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:05:50.838 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:05:51.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
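Condensed, what nvmf_veth_init built above is two veth pairs per side joined by a bridge, with the target ends of the pairs moved into the nvmf_tgt_ns_spdk namespace; the single-packet pings around this point verify the path in both directions. A hand-run reproduction of the first initiator/target pair, with names and addresses taken from the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is set up identically, and the iptables comment tags are abbreviated here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:init_if
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF:bridge
    ping -c 1 10.0.0.3    # host side -> target namespace, across the bridge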
00:05:51.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:05:51.096 00:05:51.096 --- 10.0.0.2 ping statistics --- 00:05:51.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.096 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:51.096 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62037 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62037 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62037 ']' 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.097 17:04:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.097 [2024-11-19 17:04:43.465264] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
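nvmfappstart then launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), so it can only listen on the 10.0.0.3/10.0.0.4 side of the bridge, and waitforlisten polls until the app answers on /var/tmp/spdk.sock. The -m 0xE core mask is 0b1110, i.e. cores 1 through 3, which matches both the "Total cores available: 3" notice and the three reactor_run lines below:

    printf '0x%x\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))    # -> 0xe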
00:05:51.097 [2024-11-19 17:04:43.465381] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:51.097 [2024-11-19 17:04:43.620432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.355 [2024-11-19 17:04:43.696865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:51.355 [2024-11-19 17:04:43.696918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:51.355 [2024-11-19 17:04:43.696932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:51.355 [2024-11-19 17:04:43.696942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:51.355 [2024-11-19 17:04:43.696952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:51.355 [2024-11-19 17:04:43.698334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.355 [2024-11-19 17:04:43.698486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.355 [2024-11-19 17:04:43.698492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 [2024-11-19 17:04:44.586873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 Malloc0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 
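The rpc_cmd calls above (and the '[[ 0 == 0 ]]' lines after each one, which check the RPC's exit status) are a thin wrapper over scripts/rpc.py against that socket; the same provisioning can be issued directly, and the ns_hotplug_stress section later in this log does exactly that:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000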
Delay0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 [2024-11-19 17:04:44.660420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.317 17:04:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:52.576 [2024-11-19 17:04:44.860764] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:54.477 Initializing NVMe Controllers 00:05:54.477 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:05:54.477 controller IO queue size 128 less than required 00:05:54.477 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:54.477 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:54.477 Initialization complete. Launching workers. 
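With Delay0 exported as a namespace of nqn.2016-06.io.spdk:cnode0 and the listener up on 10.0.0.3:4420, the abort example hammers the subsystem from one core (-c 0x1) at queue depth 128 (-q 128) for one second (-t 1); the delay bdev's artificial one-second latencies (1000000 us per op above) make commands back up so that aborts have something to cancel, and the "controller IO queue size 128 less than required" warning is the intended overload, not a failure. The statistics just below show roughly 27k aborts submitted with only a handful failing. The same target could also be sanity-checked from the host with the kernel initiator, a sketch relying on the nvme-tcp module loaded earlier:

    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    nvme list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode0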
00:05:54.477 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27019 00:05:54.477 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27084, failed to submit 62 00:05:54.477 success 27023, unsuccessful 61, failed 0 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:54.477 17:04:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:54.477 rmmod nvme_tcp 00:05:54.477 rmmod nvme_fabrics 00:05:54.477 rmmod nvme_keyring 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62037 ']' 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62037 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62037 ']' 00:05:54.477 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62037 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62037 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:54.736 killing process with pid 62037 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62037' 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62037 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62037 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:54.736 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:54.994 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:54.994 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:05:54.995 00:05:54.995 real 0m4.890s 00:05:54.995 user 0m12.933s 00:05:54.995 sys 0m1.148s 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.995 ************************************ 00:05:54.995 END TEST nvmf_abort 00:05:54.995 ************************************ 00:05:54.995 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
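Teardown is symmetric. The firewall rules are removed with the tag-and-strip pattern visible above: every rule was inserted with an '-m comment --comment SPDK_NVMF:...' marker, so cleanup is a single filter over the whole ruleset instead of a list of -D commands:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth/bridge deletions then succeed silently (the devices exist this time), the namespace is removed, and after the 0m4.890s summary run_test starts the next suite, nvmf_ns_hotplug_stress, which repeats the same common.sh bring-up before its own body.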
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.254 ************************************ 00:05:55.254 START TEST nvmf_ns_hotplug_stress 00:05:55.254 ************************************ 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.254 * Looking for test storage... 00:05:55.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.254 --rc genhtml_branch_coverage=1 00:05:55.254 --rc genhtml_function_coverage=1 00:05:55.254 --rc genhtml_legend=1 00:05:55.254 --rc geninfo_all_blocks=1 00:05:55.254 --rc geninfo_unexecuted_blocks=1 00:05:55.254 00:05:55.254 ' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.254 --rc genhtml_branch_coverage=1 00:05:55.254 --rc genhtml_function_coverage=1 00:05:55.254 --rc genhtml_legend=1 00:05:55.254 --rc geninfo_all_blocks=1 00:05:55.254 --rc geninfo_unexecuted_blocks=1 00:05:55.254 00:05:55.254 ' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.254 --rc genhtml_branch_coverage=1 00:05:55.254 --rc genhtml_function_coverage=1 00:05:55.254 --rc genhtml_legend=1 00:05:55.254 --rc geninfo_all_blocks=1 00:05:55.254 --rc geninfo_unexecuted_blocks=1 00:05:55.254 00:05:55.254 ' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.254 --rc genhtml_branch_coverage=1 00:05:55.254 --rc genhtml_function_coverage=1 00:05:55.254 --rc genhtml_legend=1 00:05:55.254 --rc geninfo_all_blocks=1 00:05:55.254 --rc geninfo_unexecuted_blocks=1 00:05:55.254 00:05:55.254 ' 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
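The scripts/common.sh trace above is a field-by-field version comparison: it splits the installed lcov version and the literal 2 on dots and compares the fields numerically left to right, here concluding 1.15 < 2 and therefore exporting the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage spellings. Functionally it matches a sort -V check; a compact equivalent sketch:

    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi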
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.254 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:05:55.255 17:04:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:05:55.255 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:05:55.514 Cannot find device "nvmf_init_br" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:05:55.514 Cannot find device "nvmf_init_br2" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:05:55.514 Cannot find device "nvmf_tgt_br" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:05:55.514 Cannot find device "nvmf_tgt_br2" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:05:55.514 Cannot find device "nvmf_init_br" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:05:55.514 Cannot find device "nvmf_init_br2" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:05:55.514 Cannot find device "nvmf_tgt_br" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:05:55.514 Cannot find device "nvmf_tgt_br2" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:05:55.514 Cannot find device "nvmf_br" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:05:55.514 Cannot find device "nvmf_init_if" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:05:55.514 Cannot find device "nvmf_init_if2" 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:55.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:55.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:05:55.514 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:05:55.515 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:05:55.515 17:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:05:55.515 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:05:55.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:05:55.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:05:55.773 00:05:55.773 --- 10.0.0.3 ping statistics --- 00:05:55.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.773 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:05:55.773 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:05:55.773 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:05:55.773 00:05:55.773 --- 10.0.0.4 ping statistics --- 00:05:55.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.773 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:05:55.773 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:05:55.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:55.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:05:55.773 00:05:55.773 --- 10.0.0.1 ping statistics --- 00:05:55.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.774 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:05:55.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:55.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:05:55.774 00:05:55.774 --- 10.0.0.2 ping statistics --- 00:05:55.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.774 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62355 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62355 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62355 ']' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.774 17:04:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.774 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 [2024-11-19 17:04:48.246231] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:55.774 [2024-11-19 17:04:48.246415] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.033 [2024-11-19 17:04:48.405140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.033 [2024-11-19 17:04:48.451310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.033 [2024-11-19 17:04:48.451587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.033 [2024-11-19 17:04:48.451756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.033 [2024-11-19 17:04:48.451816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.033 [2024-11-19 17:04:48.451910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:56.033 [2024-11-19 17:04:48.453523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.033 [2024-11-19 17:04:48.453686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.033 [2024-11-19 17:04:48.453855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:56.292 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:56.551 [2024-11-19 17:04:48.913277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.551 17:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:56.810 17:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:05:57.070 [2024-11-19 17:04:49.493951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:05:57.070 17:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:05:57.329 17:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:57.588 Malloc0 00:05:57.588 17:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.847 Delay0 00:05:57.847 17:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.105 17:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:58.672 NULL1 00:05:58.672 17:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:58.672 17:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62478 00:05:58.672 17:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:58.672 17:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:05:58.672 17:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.050 Read completed with error (sct=0, sc=11) 00:06:00.050 17:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.309 17:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:00.309 17:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:00.568 true 00:06:00.568 17:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:00.568 17:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.521 17:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.789 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:01.790 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:02.048 true 00:06:02.048 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:02.048 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.307 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.566 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:02.566 17:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:02.825 true 00:06:02.825 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:02.825 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.083 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.342 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:03.342 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:03.599 true 00:06:03.599 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:03.599 17:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.534 17:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.534 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:04.534 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:04.793 true 00:06:04.793 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:04.793 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.051 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.310 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:05.310 17:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:05.569 true 00:06:05.569 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:05.569 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.829 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.088 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:06.088 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:06.346 
true 00:06:06.346 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:06.346 17:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.282 17:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.540 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:07.540 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:07.798 true 00:06:07.798 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:07.798 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.056 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.315 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:08.315 17:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:08.574 true 00:06:08.574 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:08.574 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.832 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.091 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:09.091 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:09.350 true 00:06:09.609 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:09.609 17:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.545 17:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.545 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:10.545 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:10.806 true 00:06:11.070 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:11.070 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.070 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.329 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:11.329 17:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:11.588 true 00:06:11.588 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:11.588 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.847 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.106 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:12.106 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:12.364 true 00:06:12.364 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:12.364 17:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.739 17:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.739 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:13.739 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:13.998 true 00:06:13.998 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:13.998 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.256 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.515 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:14.515 17:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:14.773 true 00:06:14.773 17:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:14.773 17:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.339 17:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.597 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:15.597 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:15.856 true 00:06:15.856 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:15.856 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.116 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.375 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:16.375 17:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:16.634 true 00:06:16.634 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:16.634 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.891 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.150 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:17.150 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:17.409 true 00:06:17.409 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:17.409 17:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.345 17:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.604 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:18.604 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:18.862 true 00:06:18.862 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:18.862 17:05:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.123 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.383 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:19.383 17:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:19.642 true 00:06:19.642 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:19.642 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.900 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.160 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:20.160 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:20.420 true 00:06:20.420 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:20.420 17:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.356 17:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.924 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:21.924 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:21.924 true 00:06:21.924 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:21.924 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.182 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.441 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:22.441 17:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:22.699 true 00:06:22.699 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:22.699 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.958 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.217 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:23.217 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:23.476 true 00:06:23.476 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:23.476 17:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.413 17:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.672 17:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:24.672 17:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:24.931 true 00:06:24.931 17:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:24.931 17:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.498 17:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.498 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:25.498 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:25.756 true 00:06:25.756 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:25.756 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.015 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.273 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:26.273 17:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:26.533 true 00:06:26.533 17:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:26.533 17:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:27.471 17:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.729 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:27.729 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:27.987 true 00:06:27.987 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:27.987 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.246 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.506 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:28.506 17:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:28.764 true 00:06:28.764 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:28.764 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.023 Initializing NVMe Controllers 00:06:29.023 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.023 Controller IO queue size 128, less than required. 00:06:29.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.023 Controller IO queue size 128, less than required. 00:06:29.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.023 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:29.023 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:29.023 Initialization complete. Launching workers. 
00:06:29.023 ======================================================== 00:06:29.023 Latency(us) 00:06:29.023 Device Information : IOPS MiB/s Average min max 00:06:29.023 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 370.62 0.18 138434.06 3394.40 1032280.15 00:06:29.023 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8414.38 4.11 15211.74 2705.47 601237.95 00:06:29.023 ======================================================== 00:06:29.023 Total : 8785.00 4.29 20410.21 2705.47 1032280.15 00:06:29.023 00:06:29.023 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.282 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:29.282 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:29.541 true 00:06:29.541 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62478 00:06:29.541 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62478) - No such process 00:06:29.541 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62478 00:06:29.541 17:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.801 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.059 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:30.059 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:30.059 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:30.059 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.059 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:30.317 null0 00:06:30.317 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.317 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.317 17:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:30.577 null1 00:06:30.577 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.577 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.577 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:30.836 null2 00:06:30.836 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.836 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.836 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:31.098 null3 00:06:31.098 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:31.098 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:31.098 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:31.357 null4 00:06:31.357 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:31.357 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:31.357 17:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:31.616 null5 00:06:31.616 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:31.616 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:31.616 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:31.875 null6 00:06:31.875 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:31.875 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:31.875 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:32.134 null7 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
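At this point the spdk_nvme_perf process (PID 62478) started at ns_hotplug_stress.sh@40 has exited, and the script has moved on to spawning add_remove workers (the @62-@64 records, which continue below). Reconstructed from the @44-@50 trace lines repeated above, the main stress loop that ran while perf was alive looks roughly like this; the variable names and the stderr redirect are illustrative, not verbatim from the script:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # hot-remove namespace 1, re-add Delay0, then grow NULL1 one step
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done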
00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.134 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63552 63553 63556 63557 63559 63561 63563 63565 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.135 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.393 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.394 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.652 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.653 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.653 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.653 17:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.653 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
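The @58-@66 records set nthreads=8, create bdevs null0 through null7 (bdev_null_create with size 100 and block size 4096, as in the records above), background one add_remove worker per bdev, and wait on the eight PIDs (63552-63565). Combining the @14-@18 and @62-@64 trace lines, the worker and its spawn loop reconstruct to roughly the following; the nsid arithmetic is an assumption that matches the observed pairings (nsid 1 with null0 through nsid 8 with null7):

    add_remove() {
        local nsid=$1 bdev=$2
        # hot-add and hot-remove the same namespace ten times
        for ((i = 0; i < 10; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The interleaved add_ns/remove_ns records that follow are those eight workers racing one another against the same subsystem, which is exactly the hotplug stress under test.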
00:06:32.653 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.653 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.653 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.653 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.912 17:05:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.912 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:33.171 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.431 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:33.690 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.690 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.690 17:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.690 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
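Nothing in this run verifies the namespace list between iterations, but one way to spot-check which namespaces survive the churn is SPDK's stock nvmf_get_subsystems RPC. A hypothetical check, not taken from this log; the jq filter and JSON field names are assumptions and should be verified against the build in use:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'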
00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.949 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.210 17:05:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.210 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.470 17:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:34.729 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.989 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.249 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:35.507 17:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:35.508 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:35.508 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.766 17:05:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:35.766 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.025 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.284 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.542 17:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.802 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.061 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.320 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.579 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.579 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.579 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.579 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.579 17:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.579 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.838 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.838 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.839 17:05:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.839 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.098 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
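The @16-@18 tags in the trace above map onto a loop of roughly the following shape, a sketch reconstructed from the xtrace output (the shuffle helper, variable names, and loop-body layout are assumptions, not the verbatim target/ns_hotplug_stress.sh):

    #!/usr/bin/env bash
    # Sketch of the add/remove cycle implied by the @16-@18 trace above.
    # The rpc.py path, subsystem NQN, and null bdev names are taken from the
    # log; the shuffled ordering helper is an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                                   # @16
        for n in $(shuf -e {1..8}); do                       # add in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"   # @17
        done
        for n in $(shuf -e {1..8}); do                       # remove in random order
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n"   # @18
        done
        (( ++i ))                                            # @16
    done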
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:38.356 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:38.357 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62355 ']'
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62355
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62355 ']'
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62355
00:06:38.615 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62355
00:06:38.616 killing process with pid 62355
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62355'
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62355
00:06:38.616 17:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62355
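The @954-@978 entries above trace a killprocess helper: validate the pid, check the process name on Linux, then signal and reap it. Reconstructed from the log, it looks roughly like this (a sketch, not the verbatim autotest_common.sh; the sudo guard's real behaviour is simplified here):

    # killprocess as implied by the trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # @954: no pid given
        kill -0 "$pid" 2>/dev/null || return 0       # @958: process already gone
        if [ "$(uname)" = Linux ]; then              # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
            [ "$process_name" = sudo ] && return 0   # @964: don't signal sudo itself
        fi
        echo "killing process with pid $pid"         # @972
        kill "$pid"                                  # @973
        wait "$pid"                                  # @978: reap and collect status
    }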
00:06:38.874 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:38.874 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:06:38.875 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:06:39.133
00:06:39.133 real 0m43.912s
00:06:39.133 user 3m34.385s
00:06:39.133 sys 0m12.824s
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
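The @233-@246 sequence above tears the virtual topology down in a fixed order: detach the bridge ports, bring them down, delete the bridge and veth interfaces, then clear the target's network namespace. A condensed sketch (interface and namespace names come from the log; the error suppression and the final netns delete are assumptions):

    # Teardown order matching the nvmf_veth_fini trace above.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster 2>/dev/null || true   # detach from nvmf_br
        ip link set "$port" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true        # removes its veth peer too
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # _remove_spdk_ns (assumed)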
00:06:39.133 ************************************
00:06:39.133 END TEST nvmf_ns_hotplug_stress
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:39.133 ************************************
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:39.133 ************************************
00:06:39.133 START TEST nvmf_delete_subsystem
00:06:39.133 ************************************
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:39.133 * Looking for test storage...
00:06:39.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
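The START/END banners and the real/user/sys block above are produced by the run_test wrapper; its observable behaviour is roughly the following (a sketch inferred from the log output, not the verbatim autotest_common.sh):

    # Minimal run_test-style wrapper: opening banner, timed run, closing banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # emits the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_delete_subsystem \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp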
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:06:39.133 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.393 --rc genhtml_branch_coverage=1
00:06:39.393 --rc genhtml_function_coverage=1
00:06:39.393 --rc genhtml_legend=1
00:06:39.393 --rc geninfo_all_blocks=1
00:06:39.393 --rc geninfo_unexecuted_blocks=1
00:06:39.393
00:06:39.393 '
00:06:39.393 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.393 --rc genhtml_branch_coverage=1
00:06:39.393 --rc genhtml_function_coverage=1
00:06:39.393 --rc genhtml_legend=1
00:06:39.393 --rc geninfo_all_blocks=1
00:06:39.393 --rc geninfo_unexecuted_blocks=1
00:06:39.393
00:06:39.394 '
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:39.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.394 --rc genhtml_branch_coverage=1
00:06:39.394 --rc genhtml_function_coverage=1
00:06:39.394 --rc genhtml_legend=1
00:06:39.394 --rc geninfo_all_blocks=1
00:06:39.394 --rc geninfo_unexecuted_blocks=1
00:06:39.394
00:06:39.394 '
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:39.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.394 --rc genhtml_branch_coverage=1
00:06:39.394 --rc genhtml_function_coverage=1
00:06:39.394 --rc genhtml_legend=1
00:06:39.394 --rc geninfo_all_blocks=1
00:06:39.394 --rc geninfo_unexecuted_blocks=1
00:06:39.394
00:06:39.394 '
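The scripts/common.sh@333-@368 trace above is a field-by-field version comparison: `lt 1.15 2` splits both versions on `.`, `-` and `:`, treats missing fields as zero, and compares numerically. A sketch consistent with that trace (the real cmp_versions handles more operators and edge cases):

    # Version comparison reconstructed from the @333-@368 trace.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2 ver1_l ver2_l
        IFS='.-:' read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"          # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # missing fields compare as 0
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                        # equal: only <=, >=, == hold
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 predates 2"      # matches the traced "lt 1.15 2"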
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
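The nvmf/common.sh@17-@19 entries above generate a fresh host identity per run: `nvme gen-hostnqn` prints a uuid-based NQN, and the host ID is its trailing uuid. Roughly (the parameter expansion is an assumed equivalent, not necessarily the script's exact parsing):

    # Host identity setup as traced at @17-@19.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")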
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
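One real defect surfaces in the trace above: `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 prints `[: : integer expression expected`, because `-eq` requires an integer operand and the tested variable is empty; the script tolerates the failed test and continues. A minimal reproduction with one hedged guard (the variable name here is illustrative only, not the one common.sh actually tests):

    # Reproduces the "[: : integer expression expected" seen above.
    unset SOME_FLAG                      # illustrative name, not the real variable
    [ "$SOME_FLAG" -eq 1 ]               # -> [: : integer expression expected

    # Defensive spelling: give the empty value an integer default first.
    [ "${SOME_FLAG:-0}" -eq 1 ] || echo "flag not set"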
nvmftestinit 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
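This run of assignments names every piece of the virtual fabric that nvmf_veth_init is about to build: two initiator-side veth pairs and two target-side veth pairs, their bridge-facing peers all enslaved to nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Condensed to a single initiator/target pair, the construction traced below amounts to this sketch:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # hide target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # bridge joins the two peers
    ip link set nvmf_tgt_br master nvmf_br
    # (every link, plus lo inside the namespace, is then brought up)

Traffic from the initiator at 10.0.0.1 reaches the namespaced target at 10.0.0.3 only through nvmf_br, which is what lets the harness firewall and tear down the whole fabric by name.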
00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:39.394 Cannot find device "nvmf_init_br" 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:39.394 Cannot find device "nvmf_init_br2" 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:39.394 Cannot find device "nvmf_tgt_br" 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:06:39.394 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:39.394 Cannot find device "nvmf_tgt_br2" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:39.395 Cannot find device "nvmf_init_br" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:39.395 Cannot find device "nvmf_init_br2" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:39.395 Cannot find device "nvmf_tgt_br" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:39.395 Cannot find device "nvmf_tgt_br2" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:39.395 Cannot find device "nvmf_br" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:39.395 Cannot find device "nvmf_init_if" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:39.395 Cannot find device "nvmf_init_if2" 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:39.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
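The string of "Cannot find device" and "Cannot open network namespace" messages here is expected: nvmf_veth_init begins by tearing down whatever a previous run may have left behind, and the harness forces every probe to succeed (the lone 'true' entries in the trace) so a clean first run does not abort the script. The same idempotent-cleanup idea in plain shell, as a sketch:

    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster 2>/dev/null || true   # detach from bridge
        ip link set "$br" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true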
00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:39.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:39.395 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:39.654 17:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:39.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:39.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:06:39.654 00:06:39.654 --- 10.0.0.3 ping statistics --- 00:06:39.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.654 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:39.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:39.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:06:39.654 00:06:39.654 --- 10.0.0.4 ping statistics --- 00:06:39.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.654 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:39.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:39.654 00:06:39.654 --- 10.0.0.1 ping statistics --- 00:06:39.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.654 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:39.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
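Note what the ipts wrapper expands to in the trace above: every rule it inserts gets '-m comment --comment SPDK_NVMF:<rule>' appended. That tag is the whole cleanup strategy; the matching iptr helper at the end of this test restores the ruleset minus anything tagged. A reconstruction of the pair, matching the expansions shown:

    ipts() {  # insert a rule, tagged so it can be found again
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {  # restore everything except the tagged rules
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow are the smoke test: initiator to target in both directions, across the bridge and into the namespace, before any NVMe traffic is attempted.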
00:06:39.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:06:39.654 00:06:39.654 --- 10.0.0.2 ping statistics --- 00:06:39.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.654 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64962 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64962 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64962 ']' 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.654 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.914 [2024-11-19 17:05:32.206103] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
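With the fabric verified, nvmfappstart launches the target. The NVMF_APP recombination above matters: the 'ip netns exec nvmf_tgt_ns_spdk' prefix is folded into the command line so the target's TCP listener binds inside the namespace rather than on the host, and waitforlisten then blocks until the RPC socket answers. Roughly, with the waitforlisten loop reduced to polling a cheap RPC (a simplified stand-in, not the real helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1   # target not listening on the RPC socket yet
    done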
00:06:39.914 [2024-11-19 17:05:32.206190] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.914 [2024-11-19 17:05:32.359649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.914 [2024-11-19 17:05:32.422638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.914 [2024-11-19 17:05:32.422715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.914 [2024-11-19 17:05:32.422729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.914 [2024-11-19 17:05:32.422756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.914 [2024-11-19 17:05:32.422767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.914 [2024-11-19 17:05:32.424410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.914 [2024-11-19 17:05:32.424431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 [2024-11-19 17:05:32.652051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
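rpc_cmd is a thin wrapper over scripts/rpc.py aimed at the socket waitforlisten just confirmed, and these two calls bring up the data path: a TCP transport with an 8 KiB IO unit size, then subsystem cnode1 that any host may attach to. Stand-alone, with the flags copied from the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10   # any host, fixed serial, max 10 namespaces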
00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 [2024-11-19 17:05:32.668884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 NULL1 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 Delay0 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.173 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.439 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.439 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65004 00:06:40.439 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:40.439 17:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:40.439 [2024-11-19 17:05:32.895217] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
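The namespace wired up here is the heart of the test: a 1000 MiB null bdev wrapped in a delay bdev with every latency knob at 1000000 microseconds, so each I/O takes about a second to complete. spdk_nvme_perf then runs at queue depth 128, which guarantees a pile of commands is still in flight when nvmf_delete_subsystem fires below; the flood of "completed with error (sct=0, sc=8)" that follows is the point of the exercise, NVMe generic status 0x8, command aborted due to SQ deletion. The bdev stack as stand-alone RPCs, flags copied from the trace:

    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 read+write ~1 s
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0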
00:06:42.343 17:05:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:42.343 17:05:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.343 17:05:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 starting I/O failed: -6 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Write completed with error (sct=0, sc=8) 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 starting I/O failed: -6 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Read completed with error (sct=0, sc=8) 00:06:42.602 Write completed with error (sct=0, sc=8) 00:06:42.602 Write completed with error (sct=0, sc=8) 00:06:42.602 starting I/O failed: -6 00:06:42.602 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 [2024-11-19 17:05:34.938596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa1d000d350 is same with the state(6) to be set 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed 
with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 starting I/O failed: -6 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 [2024-11-19 17:05:34.941388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf27e0 is same with the state(6) to be set 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed 
with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 [2024-11-19 17:05:34.941773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa1d0000c40 is same with the state(6) to be set 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 
00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Write completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:42.603 Read completed with error (sct=0, sc=8) 00:06:43.540 [2024-11-19 17:05:35.912568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedee0 is same with the state(6) to be set 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 [2024-11-19 17:05:35.937339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1a50 is same with the state(6) to be set 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write 
completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 [2024-11-19 17:05:35.937833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4ea0 is same with the state(6) to be set 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 [2024-11-19 17:05:35.939561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa1d000d020 is same with the state(6) to be set 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Write completed with error (sct=0, sc=8) 00:06:43.540 Read completed with error (sct=0, sc=8) 00:06:43.540 [2024-11-19 17:05:35.940556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa1d000d680 is same with the state(6) to be set 00:06:43.540 Initializing NVMe Controllers 00:06:43.540 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:06:43.540 Controller IO queue size 128, less than required. 00:06:43.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:43.540 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:43.540 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:43.540 Initialization complete. Launching workers. 
00:06:43.541 ======================================================== 00:06:43.541 Latency(us) 00:06:43.541 Device Information : IOPS MiB/s Average min max 00:06:43.541 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.55 0.08 908765.70 1562.45 2001569.25 00:06:43.541 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.26 0.07 943982.73 1830.79 1018722.69 00:06:43.541 ======================================================== 00:06:43.541 Total : 318.81 0.16 925364.11 1562.45 2001569.25 00:06:43.541 00:06:43.541 [2024-11-19 17:05:35.941193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedee0 (9): Bad file descriptor 00:06:43.541 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:43.541 17:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.541 17:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:43.541 17:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65004 00:06:43.541 17:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65004 00:06:44.110 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65004) - No such process 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65004 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65004 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65004 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.110 [2024-11-19 17:05:36.466757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65045 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.110 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:44.369 [2024-11-19 17:05:36.647697] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
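After rebuilding the subsystem and attaching a 3-second perf run (pid 65045), the script polls for the initiator to exit on its own. The repetitive trace that follows, "(( delay++ > 20 ))" / "kill -0 65045" / "sleep 0.5", is a bounded liveness poll: 'kill -0' delivers no signal, it only reports whether the pid still exists. A sketch of the loop delete_subsystem.sh is running:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # process still alive?
        (( delay++ > 20 )) && exit 1            # give up after ~10 s
        sleep 0.5
    done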
00:06:44.627 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.627 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:44.627 17:05:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.195 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.195 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:45.195 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.761 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.761 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:45.761 17:05:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.019 17:05:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.019 17:05:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:46.019 17:05:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.590 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.590 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:46.590 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.157 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.157 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:47.157 17:05:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.416 Initializing NVMe Controllers 00:06:47.416 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.416 Controller IO queue size 128, less than required. 00:06:47.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.416 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:47.416 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:47.416 Initialization complete. Launching workers. 
00:06:47.416 ======================================================== 00:06:47.416 Latency(us) 00:06:47.416 Device Information : IOPS MiB/s Average min max 00:06:47.416 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004552.45 1000170.83 1015737.79 00:06:47.416 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1008520.51 1000856.74 1019943.42 00:06:47.416 ======================================================== 00:06:47.416 Total : 256.00 0.12 1006536.48 1000170.83 1019943.42 00:06:47.416 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65045 00:06:47.674 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65045) - No such process 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65045 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.674 rmmod nvme_tcp 00:06:47.674 rmmod nvme_fabrics 00:06:47.674 rmmod nvme_keyring 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64962 ']' 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64962 00:06:47.674 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64962 ']' 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64962 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64962 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.675 killing process with pid 64962 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64962' 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64962 00:06:47.675 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64962 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:47.934 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:06:48.193 00:06:48.193 real 0m9.165s 00:06:48.193 user 0m27.759s 00:06:48.193 sys 0m1.578s 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.193 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 END TEST nvmf_delete_subsystem 00:06:48.193 ************************************ 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.452 ************************************ 00:06:48.452 START TEST nvmf_host_management 00:06:48.452 ************************************ 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:48.452 * Looking for test storage... 00:06:48.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.452 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:48.453 
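The lt/cmp_versions trace surrounding this point is the coverage harness deciding whether the installed lcov predates 2.x so it can choose compatible '--rc' flags. cmp_versions splits each version string on '.', '-' and ':' and walks the fields numerically; a compact equivalent of that logic, assuming purely numeric fields (the real helper handles more cases):

    lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # true: 1 < 2 in the first field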
17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.453 --rc genhtml_branch_coverage=1 00:06:48.453 --rc genhtml_function_coverage=1 00:06:48.453 --rc genhtml_legend=1 00:06:48.453 --rc geninfo_all_blocks=1 00:06:48.453 --rc geninfo_unexecuted_blocks=1 00:06:48.453 00:06:48.453 ' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.453 --rc genhtml_branch_coverage=1 00:06:48.453 --rc genhtml_function_coverage=1 00:06:48.453 --rc genhtml_legend=1 00:06:48.453 --rc geninfo_all_blocks=1 00:06:48.453 --rc geninfo_unexecuted_blocks=1 00:06:48.453 00:06:48.453 ' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.453 --rc genhtml_branch_coverage=1 00:06:48.453 --rc genhtml_function_coverage=1 00:06:48.453 --rc genhtml_legend=1 00:06:48.453 --rc geninfo_all_blocks=1 00:06:48.453 --rc geninfo_unexecuted_blocks=1 00:06:48.453 00:06:48.453 ' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.453 --rc genhtml_branch_coverage=1 00:06:48.453 --rc 
genhtml_function_coverage=1 00:06:48.453 --rc genhtml_legend=1 00:06:48.453 --rc geninfo_all_blocks=1 00:06:48.453 --rc geninfo_unexecuted_blocks=1 00:06:48.453 00:06:48.453 ' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:48.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.453 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:48.454 17:05:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:48.454 Cannot find device "nvmf_init_br" 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:48.454 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:48.726 Cannot find device "nvmf_init_br2" 00:06:48.726 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:48.726 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:48.726 Cannot find device "nvmf_tgt_br" 00:06:48.726 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:48.726 17:05:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:48.726 Cannot find device "nvmf_tgt_br2" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:48.726 Cannot find device "nvmf_init_br" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:48.726 Cannot find device "nvmf_init_br2" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:48.726 Cannot find device "nvmf_tgt_br" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:48.726 Cannot find device "nvmf_tgt_br2" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:48.726 Cannot find device "nvmf_br" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:48.726 17:05:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:48.726 Cannot find device "nvmf_init_if" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:48.726 Cannot find device "nvmf_init_if2" 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:48.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:48.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:48.726 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:49.002 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:49.002 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:49.002 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:49.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:49.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:06:49.003 00:06:49.003 --- 10.0.0.3 ping statistics --- 00:06:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.003 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:49.003 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:49.003 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:06:49.003 00:06:49.003 --- 10.0.0.4 ping statistics --- 00:06:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.003 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:49.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:49.003 00:06:49.003 --- 10.0.0.1 ping statistics --- 00:06:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.003 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:49.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:06:49.003 00:06:49.003 --- 10.0.0.2 ping statistics --- 00:06:49.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.003 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65337 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65337 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65337 ']' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.003 17:05:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.003 [2024-11-19 17:05:41.437100] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:49.003 [2024-11-19 17:05:41.437179] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.262 [2024-11-19 17:05:41.590562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.262 [2024-11-19 17:05:41.669355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.262 [2024-11-19 17:05:41.669444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.262 [2024-11-19 17:05:41.669459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.262 [2024-11-19 17:05:41.669470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.262 [2024-11-19 17:05:41.669479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
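The nvmf_veth_init and nvmfappstart steps traced above reduce to a short iproute2 sequence plus launching nvmf_tgt inside the target namespace. A condensed, hand-runnable sketch (interface names, addresses, masks, and binary paths copied from the trace; the *_if2/*_br2 twin pair and the FORWARD rule are elided, and the RPC polling loop is an assumption standing in for waitforlisten):

  # build the initiator/target veth topology used by this test
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the two bridge-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the default port, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3
  # start the target in the namespace, as nvmfappstart does above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # poll the default /var/tmp/spdk.sock RPC socket (assumption)
  done

Keeping the target in its own namespace is what lets the initiator side reach 10.0.0.3:4420 over the bridge as a genuine TCP connection rather than loopback; the -m 0x1E core mask is why reactors come up on cores 1-4 below.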
00:06:49.262 [2024-11-19 17:05:41.670959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.262 [2024-11-19 17:05:41.671115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.262 [2024-11-19 17:05:41.671176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.262 [2024-11-19 17:05:41.671181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 [2024-11-19 17:05:42.494338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 Malloc0 00:06:50.197 [2024-11-19 17:05:42.573333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65409 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65409 /var/tmp/bdevperf.sock 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65409 ']' 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:50.197 { 00:06:50.197 "params": { 00:06:50.197 "name": "Nvme$subsystem", 00:06:50.197 "trtype": "$TEST_TRANSPORT", 00:06:50.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:50.197 "adrfam": "ipv4", 00:06:50.197 "trsvcid": "$NVMF_PORT", 00:06:50.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:50.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:50.197 "hdgst": ${hdgst:-false}, 00:06:50.197 "ddgst": ${ddgst:-false} 00:06:50.197 }, 00:06:50.197 "method": "bdev_nvme_attach_controller" 00:06:50.197 } 00:06:50.197 EOF 00:06:50.197 )") 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:50.197 17:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:50.197 "params": { 00:06:50.197 "name": "Nvme0", 00:06:50.197 "trtype": "tcp", 00:06:50.197 "traddr": "10.0.0.3", 00:06:50.197 "adrfam": "ipv4", 00:06:50.197 "trsvcid": "4420", 00:06:50.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:50.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:50.197 "hdgst": false, 00:06:50.197 "ddgst": false 00:06:50.197 }, 00:06:50.197 "method": "bdev_nvme_attach_controller" 00:06:50.197 }' 00:06:50.197 [2024-11-19 17:05:42.690809] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
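The heredoc above is what gen_nvmf_target_json emits for controller 0; fed to bdevperf via /dev/fd/63 it pre-attaches Nvme0 before the run starts. An equivalent stand-alone launch, writing the config to a regular file instead of a process-substitution fd (the params block and the bdevperf flags are copied from the trace; the outer "subsystems"/"bdev" wrapper is an assumption):

  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  # queue depth 64, 64 KiB I/O, verify workload, 10 s, matching the run below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10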
00:06:50.197 [2024-11-19 17:05:42.690901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65409 ] 00:06:50.455 [2024-11-19 17:05:42.847123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.455 [2024-11-19 17:05:42.922815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.713 Running I/O for 10 seconds... 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
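The waitforio exchange just traced, where the 771 reads observed on Nvme0n1 clear the >=100 threshold and trigger the break, amounts to a countdown poll over bdev_get_iostat. A minimal sketch (jq filter, counter names, and thresholds from the trace; the retry sleep is an assumption, since here the first sample already passes):

  i=10
  while [ "$i" -ne 0 ]; do
      # sample the read counter bdevperf keeps for the attached bdev
      ops=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      if [ "$ops" -ge 100 ]; then
          break              # I/O is flowing (771 ops in this run)
      fi
      sleep 1                # retry interval (assumption)
      i=$((i - 1))
  done

Only after I/O is confirmed does the test issue nvmf_subsystem_remove_host below, so the forced disconnect lands on an active connection, which is what produces the ABORTED - SQ DELETION completions that follow.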
00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:51.294 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.295 [2024-11-19 17:05:43.734279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x931470 is same with the state(6) to be set 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.295 [2024-11-19 17:05:43.746767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.295 [2024-11-19 17:05:43.747109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.747338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.295 [2024-11-19 17:05:43.747481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.747646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.295 [2024-11-19 17:05:43.747885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.747902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.295 [2024-11-19 17:05:43.747911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.747921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02660 is same with the state(6) to be set 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.295 17:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:51.295 [2024-11-19 17:05:43.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.295 [2024-11-19 17:05:43.754744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.295 [2024-11-19 17:05:43.754774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:06:51.295 [2024-11-19 17:05:43.754799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:51.295 [2024-11-19 17:05:43.754810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:51.295 [2024-11-19 17:05:43.754818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:23 through cid:62 (lba:117632 through lba:122624, len:128 each) ...]
00:06:51.296 [2024-11-19 17:05:43.755701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:51.296 [2024-11-19 17:05:43.755710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:51.296 task offset: 114688 on job bdev=Nvme0n1 fails
00:06:51.296
00:06:51.296                                                                                                  Latency(us)
00:06:51.296 [2024-11-19T17:05:43.829Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:51.296 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:51.296 Job: Nvme0n1 ended in about 0.61 seconds with error
00:06:51.296 Verification LBA range: start 0x0 length 0x400
00:06:51.296 Nvme0n1                                :       0.61    1475.11      92.19     105.37       0.00   39183.72    1936.29   44802.79
00:06:51.296 [2024-11-19T17:05:43.829Z] ===================================================================================================================
00:06:51.296 [2024-11-19T17:05:43.829Z] Total                                  :            1475.11      92.19     105.37       0.00   39183.72    1936.29   44802.79
00:06:51.296 [2024-11-19 17:05:43.756880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:51.296 [2024-11-19 17:05:43.758727] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:51.296 [2024-11-19 17:05:43.758762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a02660 (9): Bad file descriptor
00:06:51.296 [2024-11-19 17:05:43.766017] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
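The completion storm above is the host-side signature of a controller reset with I/O still in flight: once the initiator's submission queue is deleted, every WRITE still queued on qid:1 completes back with ABORTED - SQ DELETION, bdevperf records the job as failed, and the bdev_nvme layer reconnects. A minimal sketch of watching the same cycle by hand, assuming a running SPDK application with rpc.py on PATH; the controller name Nvme0 and the address details are the ones this run uses:

    # Attach the controller the same way the generated JSON config does.
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Disrupt the connection while I/O is in flight (this test scenario does it
    # deliberately); the application log then shows the ABORTED - SQ DELETION
    # completions and the reset notices seen above.
    rpc.py bdev_nvme_get_controllers   # controller state before and after the reset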
00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65409 00:06:52.231 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65409) - No such process 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:52.231 { 00:06:52.231 "params": { 00:06:52.231 "name": "Nvme$subsystem", 00:06:52.231 "trtype": "$TEST_TRANSPORT", 00:06:52.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:52.231 "adrfam": "ipv4", 00:06:52.231 "trsvcid": "$NVMF_PORT", 00:06:52.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:52.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:52.231 "hdgst": ${hdgst:-false}, 00:06:52.231 "ddgst": ${ddgst:-false} 00:06:52.231 }, 00:06:52.231 "method": "bdev_nvme_attach_controller" 00:06:52.231 } 00:06:52.231 EOF 00:06:52.231 )") 00:06:52.231 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:52.489 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:52.489 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:52.489 17:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:52.489 "params": { 00:06:52.489 "name": "Nvme0", 00:06:52.489 "trtype": "tcp", 00:06:52.489 "traddr": "10.0.0.3", 00:06:52.489 "adrfam": "ipv4", 00:06:52.489 "trsvcid": "4420", 00:06:52.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:52.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:52.489 "hdgst": false, 00:06:52.489 "ddgst": false 00:06:52.489 }, 00:06:52.489 "method": "bdev_nvme_attach_controller" 00:06:52.489 }' 00:06:52.489 [2024-11-19 17:05:44.817221] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:52.489 [2024-11-19 17:05:44.817329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65460 ] 00:06:52.489 [2024-11-19 17:05:44.966120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.746 [2024-11-19 17:05:45.038100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.746 Running I/O for 1 seconds... 
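Worth noting how the bdevperf run above is configured: gen_nvmf_target_json builds the bdev_nvme_attach_controller entry in a shell array, validates it with jq, and the caller feeds it to bdevperf as --json /dev/fd/62 via process substitution, so no config file ever touches disk. A standalone sketch of the same pattern follows; the outer subsystems/config envelope is the usual SPDK JSON-config shape and is assumed here, since the log only captures the inner entry:

    # Replay of the generated config; paths and addresses match this run.
    cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.3","adrfam":"ipv4",
      "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
      "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}'

    # Same -q/-o/-w/-t values as the run above; the config is handed over
    # through a file descriptor, never written to disk.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 \
        --json <(printf '%s\n' "$cfg")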
00:06:54.120 1505.00 IOPS, 94.06 MiB/s 00:06:54.120 Latency(us) 00:06:54.120 [2024-11-19T17:05:46.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.120 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:54.120 Verification LBA range: start 0x0 length 0x400 00:06:54.120 Nvme0n1 : 1.04 1543.53 96.47 0.00 0.00 40659.60 6315.29 37176.79 00:06:54.120 [2024-11-19T17:05:46.653Z] =================================================================================================================== 00:06:54.120 [2024-11-19T17:05:46.653Z] Total : 1543.53 96.47 0.00 0.00 40659.60 6315.29 37176.79 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.120 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.120 rmmod nvme_tcp 00:06:54.120 rmmod nvme_fabrics 00:06:54.120 rmmod nvme_keyring 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65337 ']' 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65337 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65337 ']' 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65337 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65337 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:06:54.379 killing process with pid 65337 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65337' 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65337 00:06:54.379 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65337 00:06:54.379 [2024-11-19 17:05:46.887490] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:54.638 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:54.639 17:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:54.639 00:06:54.639 real 0m6.409s 00:06:54.639 user 0m23.861s 00:06:54.639 sys 0m1.691s 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.639 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.639 ************************************ 00:06:54.639 END TEST nvmf_host_management 00:06:54.639 ************************************ 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:54.898 ************************************ 00:06:54.898 START TEST nvmf_lvol 00:06:54.898 ************************************ 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.898 * Looking for test storage... 
00:06:54.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.898 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.899 --rc genhtml_branch_coverage=1 00:06:54.899 --rc genhtml_function_coverage=1 00:06:54.899 --rc genhtml_legend=1 00:06:54.899 --rc geninfo_all_blocks=1 00:06:54.899 --rc geninfo_unexecuted_blocks=1 00:06:54.899 00:06:54.899 ' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.899 --rc genhtml_branch_coverage=1 00:06:54.899 --rc genhtml_function_coverage=1 00:06:54.899 --rc genhtml_legend=1 00:06:54.899 --rc geninfo_all_blocks=1 00:06:54.899 --rc geninfo_unexecuted_blocks=1 00:06:54.899 00:06:54.899 ' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.899 --rc genhtml_branch_coverage=1 00:06:54.899 --rc genhtml_function_coverage=1 00:06:54.899 --rc genhtml_legend=1 00:06:54.899 --rc geninfo_all_blocks=1 00:06:54.899 --rc geninfo_unexecuted_blocks=1 00:06:54.899 00:06:54.899 ' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.899 --rc genhtml_branch_coverage=1 00:06:54.899 --rc genhtml_function_coverage=1 00:06:54.899 --rc genhtml_legend=1 00:06:54.899 --rc geninfo_all_blocks=1 00:06:54.899 --rc geninfo_unexecuted_blocks=1 00:06:54.899 00:06:54.899 ' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.899 17:05:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:54.899 
17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.899 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:54.900 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:55.159 Cannot find device "nvmf_init_br" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:55.159 Cannot find device "nvmf_init_br2" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:55.159 Cannot find device "nvmf_tgt_br" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.159 Cannot find device "nvmf_tgt_br2" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:55.159 Cannot find device "nvmf_init_br" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:55.159 Cannot find device "nvmf_init_br2" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:55.159 Cannot find device "nvmf_tgt_br" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:55.159 Cannot find device "nvmf_tgt_br2" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:55.159 Cannot find device "nvmf_br" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:55.159 Cannot find device "nvmf_init_if" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:55.159 Cannot find device "nvmf_init_if2" 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:55.159 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:55.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:06:55.418 00:06:55.418 --- 10.0.0.3 ping statistics --- 00:06:55.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.418 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:55.418 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:55.418 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:06:55.418 00:06:55.418 --- 10.0.0.4 ping statistics --- 00:06:55.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.418 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:06:55.418 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:55.418 00:06:55.418 --- 10.0.0.1 ping statistics --- 00:06:55.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.419 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:55.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:55.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:06:55.419 00:06:55.419 --- 10.0.0.2 ping statistics --- 00:06:55.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.419 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65728 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65728 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65728 ']' 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.419 17:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.419 [2024-11-19 17:05:47.902070] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:55.419 [2024-11-19 17:05:47.902203] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.678 [2024-11-19 17:05:48.057229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.678 [2024-11-19 17:05:48.130404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.678 [2024-11-19 17:05:48.130480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.678 [2024-11-19 17:05:48.130528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.678 [2024-11-19 17:05:48.130550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.678 [2024-11-19 17:05:48.130559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.678 [2024-11-19 17:05:48.131890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.678 [2024-11-19 17:05:48.132010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.678 [2024-11-19 17:05:48.132016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.936 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:56.195 [2024-11-19 17:05:48.629577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.195 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:56.820 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:56.820 17:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:57.108 17:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:57.108 17:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:57.108 17:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:57.677 17:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=50c2a347-8b9e-42f9-b7af-713ce268243e 00:06:57.677 17:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
50c2a347-8b9e-42f9-b7af-713ce268243e lvol 20 00:06:57.936 17:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=93ee5bef-5256-4c06-b4f3-7fb67bf09655 00:06:57.936 17:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.195 17:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93ee5bef-5256-4c06-b4f3-7fb67bf09655 00:06:58.454 17:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:58.712 [2024-11-19 17:05:51.079846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.712 17:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:58.972 17:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65865 00:06:58.972 17:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:58.972 17:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:59.908 17:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 93ee5bef-5256-4c06-b4f3-7fb67bf09655 MY_SNAPSHOT 00:07:00.476 17:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1c500d34-e7d2-41da-b7ab-05c1fc72d9bc 00:07:00.476 17:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 93ee5bef-5256-4c06-b4f3-7fb67bf09655 30 00:07:00.735 17:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1c500d34-e7d2-41da-b7ab-05c1fc72d9bc MY_CLONE 00:07:00.993 17:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b10fd43d-ea85-4e94-89aa-c0db6073a626 00:07:00.993 17:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b10fd43d-ea85-4e94-89aa-c0db6073a626 00:07:01.560 17:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65865 00:07:09.673 Initializing NVMe Controllers 00:07:09.673 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.673 Controller IO queue size 128, less than required. 00:07:09.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.673 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:09.673 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:09.673 Initialization complete. Launching workers. 
00:07:09.673 ======================================================== 00:07:09.673 Latency(us) 00:07:09.673 Device Information : IOPS MiB/s Average min max 00:07:09.673 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10797.70 42.18 11864.92 2389.83 81191.80 00:07:09.673 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10748.30 41.99 11919.78 2776.33 57323.97 00:07:09.673 ======================================================== 00:07:09.673 Total : 21546.00 84.16 11892.29 2389.83 81191.80 00:07:09.673 00:07:09.673 17:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.673 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 93ee5bef-5256-4c06-b4f3-7fb67bf09655 00:07:09.932 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50c2a347-8b9e-42f9-b7af-713ce268243e 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.192 rmmod nvme_tcp 00:07:10.192 rmmod nvme_fabrics 00:07:10.192 rmmod nvme_keyring 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65728 ']' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65728 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65728 ']' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65728 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65728 00:07:10.192 killing process with pid 65728 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65728' 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65728 00:07:10.192 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65728 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:10.451 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:10.710 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:10.710 17:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:10.710 00:07:10.710 real 0m16.017s 00:07:10.710 user 1m5.900s 00:07:10.710 sys 0m4.148s 00:07:10.710 ************************************ 00:07:10.710 END TEST nvmf_lvol 00:07:10.710 
************************************ 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.710 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.970 ************************************ 00:07:10.970 START TEST nvmf_lvs_grow 00:07:10.970 ************************************ 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.970 * Looking for test storage... 00:07:10.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:10.970 17:06:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.970 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
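Before the lvs_grow output begins, a recap: the nvmf_lvol pass that just finished (PASS, 16.0s real) is driven entirely through rpc.py, and the trace above condenses to the sketch below. Shell variables stand in for the UUIDs each call prints (50c2a347-..., 93ee5bef-..., etc. in this run); this is a condensed reading of the trace, not an extra script the test ran.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport; -u 8192 sets the I/O unit size
  $rpc bdev_malloc_create 64 512                                  # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # RAID0 over both, 64 KiB strips
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on the raid bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf runs 10 s of randwrite against that listener:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)             # snapshot under live I/O
  $rpc bdev_lvol_resize "$lvol" 30                                # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                                 # detach the clone from its snapshot

Everything after the listener line happens while perf I/O is in flight, which is the point of the test: lvol management operations must coexist with active NVMe-oF traffic.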
00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
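At this point nvmftestinit is about to build the virtual-ethernet topology: initiator addresses 10.0.0.1/10.0.0.2 stay on the host, target addresses 10.0.0.3/10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, and all four veth pairs meet on the nvmf_br bridge. This run drives I/O with SPDK's own initiator (bdevperf over the RPC socket above), but common.sh also exports NVME_CONNECT and NVME_HOSTNQN for kernel-initiator tests; purely as a sketch, a manual connect to the cnode0 listener this test creates later would look like this (not executed in this run):

  # Hypothetical kernel nvme-cli session using the helpers common.sh defines above.
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
  nvme list                                      # the exported lvol appears as a kernel namespace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode0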
00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:10.971 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:11.230 Cannot find device "nvmf_init_br" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:11.230 Cannot find device "nvmf_init_br2" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:11.230 Cannot find device "nvmf_tgt_br" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:11.230 Cannot find device "nvmf_tgt_br2" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:11.230 Cannot find device "nvmf_init_br" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:11.230 Cannot find device "nvmf_init_br2" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:11.230 Cannot find device "nvmf_tgt_br" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:11.230 Cannot find device "nvmf_tgt_br2" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:11.230 Cannot find device "nvmf_br" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:11.230 Cannot find device "nvmf_init_if" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:11.230 Cannot find device "nvmf_init_if2" 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:11.230 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:11.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:11.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:11.231 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
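Condensed, the topology those ip commands just built looks like this for one initiator/target pair; the *_if2 / *_br2 interfaces in the trace repeat the same pattern for the second pair:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end / bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end / bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # host-side initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # both bridge-side veth peers
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br   # join nvmf_br

The ipts rules and ping checks that follow open port 4420 through the host firewall and confirm the host and the namespace can reach each other in both directions before any NVMe-oF traffic starts.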
00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:11.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:07:11.490 00:07:11.490 --- 10.0.0.3 ping statistics --- 00:07:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.490 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:11.490 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:11.490 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:07:11.490 00:07:11.490 --- 10.0.0.4 ping statistics --- 00:07:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.490 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:11.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:11.490 00:07:11.490 --- 10.0.0.1 ping statistics --- 00:07:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.490 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:11.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:07:11.490 00:07:11.490 --- 10.0.0.2 ping statistics --- 00:07:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.490 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66283 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66283 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66283 ']' 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.490 17:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.490 [2024-11-19 17:06:03.969089] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:11.490 [2024-11-19 17:06:03.969177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.749 [2024-11-19 17:06:04.118694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.749 [2024-11-19 17:06:04.171734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.749 [2024-11-19 17:06:04.172044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.749 [2024-11-19 17:06:04.172250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.749 [2024-11-19 17:06:04.172418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.749 [2024-11-19 17:06:04.172461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.749 [2024-11-19 17:06:04.173006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.008 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.268 [2024-11-19 17:06:04.651243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.268 ************************************ 00:07:12.268 START TEST lvs_grow_clean 00:07:12.268 ************************************ 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:12.268 17:06:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:12.268 17:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.527 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:12.527 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:13.094 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d lvol 150 00:07:13.353 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d54cffd-fa59-4d81-812b-07085e8d58d2 00:07:13.353 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:13.353 17:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:13.612 [2024-11-19 17:06:06.084623] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:13.612 [2024-11-19 17:06:06.084718] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:13.612 true 00:07:13.612 17:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:13.612 17:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:14.180 17:06:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:14.180 17:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:14.180 17:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d54cffd-fa59-4d81-812b-07085e8d58d2 00:07:14.439 17:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:14.697 [2024-11-19 17:06:07.205266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:14.697 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66437 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66437 /var/tmp/bdevperf.sock 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66437 ']' 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.956 17:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:15.215 [2024-11-19 17:06:07.510968] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:15.215 [2024-11-19 17:06:07.511052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66437 ] 00:07:15.215 [2024-11-19 17:06:07.659324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.215 [2024-11-19 17:06:07.726543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.149 17:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.149 17:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:16.149 17:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:16.408 Nvme0n1 00:07:16.408 17:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:16.666 [ 00:07:16.666 { 00:07:16.666 "aliases": [ 00:07:16.666 "7d54cffd-fa59-4d81-812b-07085e8d58d2" 00:07:16.666 ], 00:07:16.666 "assigned_rate_limits": { 00:07:16.666 "r_mbytes_per_sec": 0, 00:07:16.666 "rw_ios_per_sec": 0, 00:07:16.666 "rw_mbytes_per_sec": 0, 00:07:16.666 "w_mbytes_per_sec": 0 00:07:16.666 }, 00:07:16.666 "block_size": 4096, 00:07:16.666 "claimed": false, 00:07:16.666 "driver_specific": { 00:07:16.666 "mp_policy": "active_passive", 00:07:16.666 "nvme": [ 00:07:16.666 { 00:07:16.666 "ctrlr_data": { 00:07:16.666 "ana_reporting": false, 00:07:16.666 "cntlid": 1, 00:07:16.666 "firmware_revision": "25.01", 00:07:16.666 "model_number": "SPDK bdev Controller", 00:07:16.666 "multi_ctrlr": true, 00:07:16.666 "oacs": { 00:07:16.666 "firmware": 0, 00:07:16.666 "format": 0, 00:07:16.666 "ns_manage": 0, 00:07:16.666 "security": 0 00:07:16.666 }, 00:07:16.666 "serial_number": "SPDK0", 00:07:16.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.666 "vendor_id": "0x8086" 00:07:16.666 }, 00:07:16.666 "ns_data": { 00:07:16.666 "can_share": true, 00:07:16.666 "id": 1 00:07:16.666 }, 00:07:16.666 "trid": { 00:07:16.666 "adrfam": "IPv4", 00:07:16.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.666 "traddr": "10.0.0.3", 00:07:16.666 "trsvcid": "4420", 00:07:16.666 "trtype": "TCP" 00:07:16.666 }, 00:07:16.666 "vs": { 00:07:16.666 "nvme_version": "1.3" 00:07:16.666 } 00:07:16.666 } 00:07:16.666 ] 00:07:16.666 }, 00:07:16.666 "memory_domains": [ 00:07:16.666 { 00:07:16.666 "dma_device_id": "system", 00:07:16.666 "dma_device_type": 1 00:07:16.666 } 00:07:16.666 ], 00:07:16.666 "name": "Nvme0n1", 00:07:16.666 "num_blocks": 38912, 00:07:16.666 "numa_id": -1, 00:07:16.666 "product_name": "NVMe disk", 00:07:16.666 "supported_io_types": { 00:07:16.666 "abort": true, 00:07:16.666 "compare": true, 00:07:16.666 "compare_and_write": true, 00:07:16.666 "copy": true, 00:07:16.666 "flush": true, 00:07:16.666 "get_zone_info": false, 00:07:16.666 "nvme_admin": true, 00:07:16.666 "nvme_io": true, 00:07:16.666 "nvme_io_md": false, 00:07:16.666 "nvme_iov_md": false, 00:07:16.666 "read": true, 00:07:16.666 "reset": true, 00:07:16.666 "seek_data": false, 00:07:16.666 "seek_hole": false, 00:07:16.666 "unmap": true, 00:07:16.666 
"write": true, 00:07:16.666 "write_zeroes": true, 00:07:16.666 "zcopy": false, 00:07:16.666 "zone_append": false, 00:07:16.666 "zone_management": false 00:07:16.666 }, 00:07:16.666 "uuid": "7d54cffd-fa59-4d81-812b-07085e8d58d2", 00:07:16.666 "zoned": false 00:07:16.666 } 00:07:16.666 ] 00:07:16.666 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66485 00:07:16.666 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:16.666 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:16.666 Running I/O for 10 seconds... 00:07:17.619 Latency(us) 00:07:17.619 [2024-11-19T17:06:10.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.619 Nvme0n1 : 1.00 7267.00 28.39 0.00 0.00 0.00 0.00 0.00 00:07:17.619 [2024-11-19T17:06:10.152Z] =================================================================================================================== 00:07:17.619 [2024-11-19T17:06:10.153Z] Total : 7267.00 28.39 0.00 0.00 0.00 0.00 0.00 00:07:17.620 00:07:18.553 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:18.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.811 Nvme0n1 : 2.00 7278.00 28.43 0.00 0.00 0.00 0.00 0.00 00:07:18.811 [2024-11-19T17:06:11.344Z] =================================================================================================================== 00:07:18.811 [2024-11-19T17:06:11.344Z] Total : 7278.00 28.43 0.00 0.00 0.00 0.00 0.00 00:07:18.811 00:07:18.811 true 00:07:19.069 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:19.069 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:19.327 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:19.327 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:19.327 17:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66485 00:07:19.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.585 Nvme0n1 : 3.00 7299.67 28.51 0.00 0.00 0.00 0.00 0.00 00:07:19.585 [2024-11-19T17:06:12.118Z] =================================================================================================================== 00:07:19.585 [2024-11-19T17:06:12.118Z] Total : 7299.67 28.51 0.00 0.00 0.00 0.00 0.00 00:07:19.585 00:07:20.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.959 Nvme0n1 : 4.00 7275.50 28.42 0.00 0.00 0.00 0.00 0.00 00:07:20.959 [2024-11-19T17:06:13.492Z] =================================================================================================================== 00:07:20.959 [2024-11-19T17:06:13.492Z] Total : 7275.50 28.42 0.00 0.00 0.00 
0.00 0.00 00:07:20.959 00:07:21.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.892 Nvme0n1 : 5.00 7306.60 28.54 0.00 0.00 0.00 0.00 0.00 00:07:21.892 [2024-11-19T17:06:14.425Z] =================================================================================================================== 00:07:21.892 [2024-11-19T17:06:14.425Z] Total : 7306.60 28.54 0.00 0.00 0.00 0.00 0.00 00:07:21.892 00:07:22.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.828 Nvme0n1 : 6.00 7302.83 28.53 0.00 0.00 0.00 0.00 0.00 00:07:22.828 [2024-11-19T17:06:15.361Z] =================================================================================================================== 00:07:22.828 [2024-11-19T17:06:15.361Z] Total : 7302.83 28.53 0.00 0.00 0.00 0.00 0.00 00:07:22.828 00:07:23.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.761 Nvme0n1 : 7.00 7321.43 28.60 0.00 0.00 0.00 0.00 0.00 00:07:23.761 [2024-11-19T17:06:16.294Z] =================================================================================================================== 00:07:23.761 [2024-11-19T17:06:16.294Z] Total : 7321.43 28.60 0.00 0.00 0.00 0.00 0.00 00:07:23.761 00:07:24.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.694 Nvme0n1 : 8.00 7367.75 28.78 0.00 0.00 0.00 0.00 0.00 00:07:24.694 [2024-11-19T17:06:17.227Z] =================================================================================================================== 00:07:24.694 [2024-11-19T17:06:17.227Z] Total : 7367.75 28.78 0.00 0.00 0.00 0.00 0.00 00:07:24.694 00:07:25.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.628 Nvme0n1 : 9.00 7396.00 28.89 0.00 0.00 0.00 0.00 0.00 00:07:25.628 [2024-11-19T17:06:18.161Z] =================================================================================================================== 00:07:25.628 [2024-11-19T17:06:18.161Z] Total : 7396.00 28.89 0.00 0.00 0.00 0.00 0.00 00:07:25.628 00:07:26.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.620 Nvme0n1 : 10.00 7390.20 28.87 0.00 0.00 0.00 0.00 0.00 00:07:26.620 [2024-11-19T17:06:19.153Z] =================================================================================================================== 00:07:26.620 [2024-11-19T17:06:19.153Z] Total : 7390.20 28.87 0.00 0.00 0.00 0.00 0.00 00:07:26.620 00:07:26.620 00:07:26.620 Latency(us) 00:07:26.620 [2024-11-19T17:06:19.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.620 Nvme0n1 : 10.02 7397.83 28.90 0.00 0.00 17289.98 7387.69 77689.95 00:07:26.620 [2024-11-19T17:06:19.153Z] =================================================================================================================== 00:07:26.620 [2024-11-19T17:06:19.153Z] Total : 7397.83 28.90 0.00 0.00 17289.98 7387.69 77689.95 00:07:26.620 { 00:07:26.620 "results": [ 00:07:26.620 { 00:07:26.620 "job": "Nvme0n1", 00:07:26.620 "core_mask": "0x2", 00:07:26.620 "workload": "randwrite", 00:07:26.620 "status": "finished", 00:07:26.620 "queue_depth": 128, 00:07:26.620 "io_size": 4096, 00:07:26.620 "runtime": 10.019428, 00:07:26.620 "iops": 7397.827500731579, 00:07:26.620 "mibps": 28.89776367473273, 00:07:26.620 "io_failed": 0, 00:07:26.620 "io_timeout": 0, 00:07:26.620 "avg_latency_us": 
17289.984107380707, 00:07:26.620 "min_latency_us": 7387.694545454546, 00:07:26.620 "max_latency_us": 77689.9490909091 00:07:26.620 } 00:07:26.620 ], 00:07:26.620 "core_count": 1 00:07:26.620 } 00:07:26.878 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66437 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66437 ']' 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66437 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66437 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.879 killing process with pid 66437 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66437' 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66437 00:07:26.879 Received shutdown signal, test time was about 10.000000 seconds 00:07:26.879 00:07:26.879 Latency(us) 00:07:26.879 [2024-11-19T17:06:19.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.879 [2024-11-19T17:06:19.412Z] =================================================================================================================== 00:07:26.879 [2024-11-19T17:06:19.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66437 00:07:26.879 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:27.445 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.445 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:27.445 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:27.704 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:27.704 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:27.704 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.963 [2024-11-19 17:06:20.415390] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 
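
Editor's note: the vbdev_lvs_hotremove_cb NOTICE above is the error-injection step of the clean test — the backing aio_bdev is deleted while the lvstore is still open, so the very next bdev_lvol_get_lvstores call is expected to fail with -19 (No such device), which is exactly what the NOT block below exercises. A minimal standalone sketch of that check (the test harness wraps it in its NOT helper instead; rpc.py path and lvstore UUID are taken from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Delete the AIO bdev out from under the lvstore; SPDK logs the
    # hotremove NOTICE and closes the lvstore.
    $RPC bdev_aio_delete aio_bdev

    # Lookup must now fail with Code=-19 Msg='No such device'
    if $RPC bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d; then
        echo "lvstore lookup unexpectedly succeeded" >&2
        exit 1
    fi
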
00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.963 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:28.221 2024/11/19 17:06:20 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:239d9be9-a8b0-4a37-b5b7-4c8f6641f92d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:28.221 request: 00:07:28.221 { 00:07:28.221 "method": "bdev_lvol_get_lvstores", 00:07:28.221 "params": { 00:07:28.221 "uuid": "239d9be9-a8b0-4a37-b5b7-4c8f6641f92d" 00:07:28.221 } 00:07:28.221 } 00:07:28.221 Got JSON-RPC error response 00:07:28.221 GoRPCClient: error on JSON-RPC call 00:07:28.221 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:28.221 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.221 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.221 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.221 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.480 aio_bdev 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d54cffd-fa59-4d81-812b-07085e8d58d2 00:07:28.480 17:06:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7d54cffd-fa59-4d81-812b-07085e8d58d2 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.480 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.738 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d54cffd-fa59-4d81-812b-07085e8d58d2 -t 2000 00:07:28.997 [ 00:07:28.997 { 00:07:28.997 "aliases": [ 00:07:28.997 "lvs/lvol" 00:07:28.997 ], 00:07:28.997 "assigned_rate_limits": { 00:07:28.997 "r_mbytes_per_sec": 0, 00:07:28.997 "rw_ios_per_sec": 0, 00:07:28.997 "rw_mbytes_per_sec": 0, 00:07:28.997 "w_mbytes_per_sec": 0 00:07:28.997 }, 00:07:28.997 "block_size": 4096, 00:07:28.997 "claimed": false, 00:07:28.997 "driver_specific": { 00:07:28.997 "lvol": { 00:07:28.997 "base_bdev": "aio_bdev", 00:07:28.997 "clone": false, 00:07:28.997 "esnap_clone": false, 00:07:28.997 "lvol_store_uuid": "239d9be9-a8b0-4a37-b5b7-4c8f6641f92d", 00:07:28.997 "num_allocated_clusters": 38, 00:07:28.997 "snapshot": false, 00:07:28.997 "thin_provision": false 00:07:28.997 } 00:07:28.997 }, 00:07:28.997 "name": "7d54cffd-fa59-4d81-812b-07085e8d58d2", 00:07:28.997 "num_blocks": 38912, 00:07:28.997 "product_name": "Logical Volume", 00:07:28.997 "supported_io_types": { 00:07:28.997 "abort": false, 00:07:28.997 "compare": false, 00:07:28.997 "compare_and_write": false, 00:07:28.997 "copy": false, 00:07:28.997 "flush": false, 00:07:28.997 "get_zone_info": false, 00:07:28.997 "nvme_admin": false, 00:07:28.997 "nvme_io": false, 00:07:28.997 "nvme_io_md": false, 00:07:28.997 "nvme_iov_md": false, 00:07:28.997 "read": true, 00:07:28.997 "reset": true, 00:07:28.997 "seek_data": true, 00:07:28.997 "seek_hole": true, 00:07:28.997 "unmap": true, 00:07:28.997 "write": true, 00:07:28.997 "write_zeroes": true, 00:07:28.997 "zcopy": false, 00:07:28.997 "zone_append": false, 00:07:28.997 "zone_management": false 00:07:28.997 }, 00:07:28.997 "uuid": "7d54cffd-fa59-4d81-812b-07085e8d58d2", 00:07:28.997 "zoned": false 00:07:28.997 } 00:07:28.997 ] 00:07:28.997 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:28.997 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:28.997 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.256 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.256 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:29.256 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:29.823 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:29.823 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7d54cffd-fa59-4d81-812b-07085e8d58d2 00:07:29.823 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 239d9be9-a8b0-4a37-b5b7-4c8f6641f92d 00:07:30.390 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:30.648 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.906 ************************************ 00:07:30.906 END TEST lvs_grow_clean 00:07:30.906 ************************************ 00:07:30.906 00:07:30.906 real 0m18.649s 00:07:30.906 user 0m17.883s 00:07:30.906 sys 0m2.250s 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.906 ************************************ 00:07:30.906 START TEST lvs_grow_dirty 00:07:30.906 ************************************ 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.906 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.906 
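
Editor's note: the dirty variant now rebuilds the fixture from scratch — a 200 MiB file-backed AIO bdev carrying an lvstore with 4 MiB clusters, plus a 150 MiB lvol. A condensed sketch of that setup, using the same commands and flags the script issues below (paths, sizes, and flags are taken from this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO"                    # backing file
    $RPC bdev_aio_create "$AIO" aio_bdev 4096  # 4 KiB block size

    # 4 MiB clusters; prints the new lvstore UUID (a87bb87f-... in this run)
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 150 MiB lvol; prints the lvol UUID (2e33b7d0-... in this run)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

The grow half of the test then truncates the same file to 400M and calls bdev_aio_rescan, which is what produces the "old block count 51200, new block count 102400" resize NOTICE a few lines further on.
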
17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.474 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.474 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.732 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:31.732 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:31.732 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:31.991 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:31.991 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:31.991 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a87bb87f-e521-4242-a6a3-92f88b0d311f lvol 150 00:07:32.250 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:32.250 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.250 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.508 [2024-11-19 17:06:24.892716] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:32.508 [2024-11-19 17:06:24.892870] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.508 true 00:07:32.508 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:32.508 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.767 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.767 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.026 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:33.284 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:33.543 [2024-11-19 17:06:25.917193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:33.543 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:33.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66889 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66889 /var/tmp/bdevperf.sock 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66889 ']' 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.802 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:33.802 [2024-11-19 17:06:26.237509] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
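
Editor's note: with the TCP listener up, bdevperf runs as a separate process driven over /var/tmp/bdevperf.sock rather than the target's default socket. The attach-and-run sequence, condensed from the xtrace above and below (socket path, address, NQN, and workload flags are the ones used in this run):

    SOCK=/var/tmp/bdevperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the exported namespace inside bdevperf as Nvme0
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Start the pre-configured workload (-q 128 -o 4096 -w randwrite -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$SOCK" perform_tests
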
00:07:33.802 [2024-11-19 17:06:26.237619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66889 ] 00:07:34.060 [2024-11-19 17:06:26.386888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.060 [2024-11-19 17:06:26.447006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.996 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.996 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:34.996 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.254 Nvme0n1 00:07:35.254 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:35.513 [ 00:07:35.513 { 00:07:35.513 "aliases": [ 00:07:35.513 "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4" 00:07:35.513 ], 00:07:35.513 "assigned_rate_limits": { 00:07:35.513 "r_mbytes_per_sec": 0, 00:07:35.513 "rw_ios_per_sec": 0, 00:07:35.513 "rw_mbytes_per_sec": 0, 00:07:35.513 "w_mbytes_per_sec": 0 00:07:35.513 }, 00:07:35.513 "block_size": 4096, 00:07:35.513 "claimed": false, 00:07:35.513 "driver_specific": { 00:07:35.513 "mp_policy": "active_passive", 00:07:35.513 "nvme": [ 00:07:35.513 { 00:07:35.513 "ctrlr_data": { 00:07:35.513 "ana_reporting": false, 00:07:35.513 "cntlid": 1, 00:07:35.513 "firmware_revision": "25.01", 00:07:35.513 "model_number": "SPDK bdev Controller", 00:07:35.513 "multi_ctrlr": true, 00:07:35.513 "oacs": { 00:07:35.513 "firmware": 0, 00:07:35.513 "format": 0, 00:07:35.513 "ns_manage": 0, 00:07:35.513 "security": 0 00:07:35.513 }, 00:07:35.513 "serial_number": "SPDK0", 00:07:35.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.513 "vendor_id": "0x8086" 00:07:35.513 }, 00:07:35.513 "ns_data": { 00:07:35.513 "can_share": true, 00:07:35.513 "id": 1 00:07:35.513 }, 00:07:35.513 "trid": { 00:07:35.513 "adrfam": "IPv4", 00:07:35.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.513 "traddr": "10.0.0.3", 00:07:35.513 "trsvcid": "4420", 00:07:35.513 "trtype": "TCP" 00:07:35.513 }, 00:07:35.513 "vs": { 00:07:35.513 "nvme_version": "1.3" 00:07:35.513 } 00:07:35.513 } 00:07:35.513 ] 00:07:35.513 }, 00:07:35.513 "memory_domains": [ 00:07:35.513 { 00:07:35.513 "dma_device_id": "system", 00:07:35.513 "dma_device_type": 1 00:07:35.513 } 00:07:35.513 ], 00:07:35.513 "name": "Nvme0n1", 00:07:35.513 "num_blocks": 38912, 00:07:35.513 "numa_id": -1, 00:07:35.513 "product_name": "NVMe disk", 00:07:35.513 "supported_io_types": { 00:07:35.513 "abort": true, 00:07:35.513 "compare": true, 00:07:35.513 "compare_and_write": true, 00:07:35.513 "copy": true, 00:07:35.513 "flush": true, 00:07:35.514 "get_zone_info": false, 00:07:35.514 "nvme_admin": true, 00:07:35.514 "nvme_io": true, 00:07:35.514 "nvme_io_md": false, 00:07:35.514 "nvme_iov_md": false, 00:07:35.514 "read": true, 00:07:35.514 "reset": true, 00:07:35.514 "seek_data": false, 00:07:35.514 "seek_hole": false, 00:07:35.514 "unmap": true, 00:07:35.514 
"write": true, 00:07:35.514 "write_zeroes": true, 00:07:35.514 "zcopy": false, 00:07:35.514 "zone_append": false, 00:07:35.514 "zone_management": false 00:07:35.514 }, 00:07:35.514 "uuid": "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4", 00:07:35.514 "zoned": false 00:07:35.514 } 00:07:35.514 ] 00:07:35.514 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.514 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66942 00:07:35.514 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:35.514 Running I/O for 10 seconds... 00:07:36.452 Latency(us) 00:07:36.452 [2024-11-19T17:06:28.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.452 Nvme0n1 : 1.00 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:07:36.452 [2024-11-19T17:06:28.985Z] =================================================================================================================== 00:07:36.452 [2024-11-19T17:06:28.985Z] Total : 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:07:36.452 00:07:37.387 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:37.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.645 Nvme0n1 : 2.00 7732.50 30.21 0.00 0.00 0.00 0.00 0.00 00:07:37.645 [2024-11-19T17:06:30.179Z] =================================================================================================================== 00:07:37.646 [2024-11-19T17:06:30.179Z] Total : 7732.50 30.21 0.00 0.00 0.00 0.00 0.00 00:07:37.646 00:07:37.646 true 00:07:37.904 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:37.904 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.162 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:38.162 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:38.162 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66942 00:07:38.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.421 Nvme0n1 : 3.00 7647.33 29.87 0.00 0.00 0.00 0.00 0.00 00:07:38.421 [2024-11-19T17:06:30.954Z] =================================================================================================================== 00:07:38.421 [2024-11-19T17:06:30.954Z] Total : 7647.33 29.87 0.00 0.00 0.00 0.00 0.00 00:07:38.421 00:07:39.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.799 Nvme0n1 : 4.00 7652.25 29.89 0.00 0.00 0.00 0.00 0.00 00:07:39.799 [2024-11-19T17:06:32.332Z] =================================================================================================================== 00:07:39.799 [2024-11-19T17:06:32.332Z] Total : 7652.25 29.89 0.00 0.00 0.00 
0.00 0.00 00:07:39.799 00:07:40.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.736 Nvme0n1 : 5.00 7643.20 29.86 0.00 0.00 0.00 0.00 0.00 00:07:40.736 [2024-11-19T17:06:33.269Z] =================================================================================================================== 00:07:40.736 [2024-11-19T17:06:33.269Z] Total : 7643.20 29.86 0.00 0.00 0.00 0.00 0.00 00:07:40.736 00:07:41.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.671 Nvme0n1 : 6.00 7465.67 29.16 0.00 0.00 0.00 0.00 0.00 00:07:41.671 [2024-11-19T17:06:34.204Z] =================================================================================================================== 00:07:41.671 [2024-11-19T17:06:34.204Z] Total : 7465.67 29.16 0.00 0.00 0.00 0.00 0.00 00:07:41.671 00:07:42.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.607 Nvme0n1 : 7.00 7441.00 29.07 0.00 0.00 0.00 0.00 0.00 00:07:42.607 [2024-11-19T17:06:35.140Z] =================================================================================================================== 00:07:42.607 [2024-11-19T17:06:35.140Z] Total : 7441.00 29.07 0.00 0.00 0.00 0.00 0.00 00:07:42.607 00:07:43.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.541 Nvme0n1 : 8.00 7383.62 28.84 0.00 0.00 0.00 0.00 0.00 00:07:43.541 [2024-11-19T17:06:36.074Z] =================================================================================================================== 00:07:43.541 [2024-11-19T17:06:36.074Z] Total : 7383.62 28.84 0.00 0.00 0.00 0.00 0.00 00:07:43.541 00:07:44.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.490 Nvme0n1 : 9.00 7335.56 28.65 0.00 0.00 0.00 0.00 0.00 00:07:44.490 [2024-11-19T17:06:37.023Z] =================================================================================================================== 00:07:44.490 [2024-11-19T17:06:37.023Z] Total : 7335.56 28.65 0.00 0.00 0.00 0.00 0.00 00:07:44.490 00:07:45.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.426 Nvme0n1 : 10.00 7291.20 28.48 0.00 0.00 0.00 0.00 0.00 00:07:45.426 [2024-11-19T17:06:37.959Z] =================================================================================================================== 00:07:45.426 [2024-11-19T17:06:37.959Z] Total : 7291.20 28.48 0.00 0.00 0.00 0.00 0.00 00:07:45.426 00:07:45.426 00:07:45.426 Latency(us) 00:07:45.426 [2024-11-19T17:06:37.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.426 Nvme0n1 : 10.01 7293.65 28.49 0.00 0.00 17537.71 7238.75 107240.73 00:07:45.426 [2024-11-19T17:06:37.959Z] =================================================================================================================== 00:07:45.426 [2024-11-19T17:06:37.959Z] Total : 7293.65 28.49 0.00 0.00 17537.71 7238.75 107240.73 00:07:45.426 { 00:07:45.426 "results": [ 00:07:45.426 { 00:07:45.426 "job": "Nvme0n1", 00:07:45.426 "core_mask": "0x2", 00:07:45.426 "workload": "randwrite", 00:07:45.426 "status": "finished", 00:07:45.426 "queue_depth": 128, 00:07:45.426 "io_size": 4096, 00:07:45.426 "runtime": 10.014197, 00:07:45.426 "iops": 7293.645211892676, 00:07:45.426 "mibps": 28.490801608955767, 00:07:45.426 "io_failed": 0, 00:07:45.426 "io_timeout": 0, 00:07:45.426 "avg_latency_us": 
17537.705715025393, 00:07:45.426 "min_latency_us": 7238.749090909091, 00:07:45.426 "max_latency_us": 107240.72727272728 00:07:45.426 } 00:07:45.426 ], 00:07:45.426 "core_count": 1 00:07:45.426 } 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66889 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66889 ']' 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66889 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.686 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66889 00:07:45.686 killing process with pid 66889 00:07:45.686 Received shutdown signal, test time was about 10.000000 seconds 00:07:45.686 00:07:45.686 Latency(us) 00:07:45.686 [2024-11-19T17:06:38.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.686 [2024-11-19T17:06:38.219Z] =================================================================================================================== 00:07:45.686 [2024-11-19T17:06:38.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66889' 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66889 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66889 00:07:45.686 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:46.254 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.255 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:46.255 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:46.514 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:46.514 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:46.514 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66283 00:07:46.514 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66283 00:07:46.773 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66283 Killed "${NVMF_APP[@]}" "$@" 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67105 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67105 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67105 ']' 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.773 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.773 [2024-11-19 17:06:39.132459] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:46.773 [2024-11-19 17:06:39.132607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.773 [2024-11-19 17:06:39.284390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.032 [2024-11-19 17:06:39.343030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.032 [2024-11-19 17:06:39.343103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.032 [2024-11-19 17:06:39.343124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.032 [2024-11-19 17:06:39.343132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.032 [2024-11-19 17:06:39.343141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
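
Editor's note: this is the dirty-specific twist — the "line 75: 66283 Killed" message above is the target being taken down with kill -9 mid-life, then restarted via nvmfappstart. Because the lvstore was never cleanly unloaded, re-registering the backing file with the fresh target forces blobstore metadata replay, visible just below in the bs_recover / "Recover: blob" notices. Roughly (paths and UUID from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # Re-register the backing file with the restarted target; loading
    # the dirty lvstore from it triggers blobstore recovery.
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    $RPC bdev_wait_for_examine

    # The lvol should reappear intact (polled with a 2000 ms timeout)
    $RPC bdev_get_bdevs -b 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 -t 2000
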
00:07:47.032 [2024-11-19 17:06:39.343593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.598 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.598 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:47.598 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.598 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.598 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.857 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.857 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.857 [2024-11-19 17:06:40.385142] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:47.857 [2024-11-19 17:06:40.385564] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:47.857 [2024-11-19 17:06:40.385826] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.115 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.375 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 -t 2000 00:07:48.634 [ 00:07:48.634 { 00:07:48.634 "aliases": [ 00:07:48.634 "lvs/lvol" 00:07:48.634 ], 00:07:48.634 "assigned_rate_limits": { 00:07:48.634 "r_mbytes_per_sec": 0, 00:07:48.634 "rw_ios_per_sec": 0, 00:07:48.634 "rw_mbytes_per_sec": 0, 00:07:48.634 "w_mbytes_per_sec": 0 00:07:48.634 }, 00:07:48.634 "block_size": 4096, 00:07:48.634 "claimed": false, 00:07:48.634 "driver_specific": { 00:07:48.634 "lvol": { 00:07:48.634 "base_bdev": "aio_bdev", 00:07:48.634 "clone": false, 00:07:48.634 "esnap_clone": false, 00:07:48.634 "lvol_store_uuid": "a87bb87f-e521-4242-a6a3-92f88b0d311f", 00:07:48.634 "num_allocated_clusters": 38, 00:07:48.634 "snapshot": false, 00:07:48.634 
"thin_provision": false 00:07:48.634 } 00:07:48.634 }, 00:07:48.634 "name": "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4", 00:07:48.634 "num_blocks": 38912, 00:07:48.634 "product_name": "Logical Volume", 00:07:48.634 "supported_io_types": { 00:07:48.634 "abort": false, 00:07:48.634 "compare": false, 00:07:48.634 "compare_and_write": false, 00:07:48.634 "copy": false, 00:07:48.634 "flush": false, 00:07:48.634 "get_zone_info": false, 00:07:48.634 "nvme_admin": false, 00:07:48.634 "nvme_io": false, 00:07:48.634 "nvme_io_md": false, 00:07:48.634 "nvme_iov_md": false, 00:07:48.634 "read": true, 00:07:48.634 "reset": true, 00:07:48.634 "seek_data": true, 00:07:48.634 "seek_hole": true, 00:07:48.634 "unmap": true, 00:07:48.634 "write": true, 00:07:48.634 "write_zeroes": true, 00:07:48.634 "zcopy": false, 00:07:48.634 "zone_append": false, 00:07:48.634 "zone_management": false 00:07:48.634 }, 00:07:48.634 "uuid": "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4", 00:07:48.634 "zoned": false 00:07:48.634 } 00:07:48.634 ] 00:07:48.634 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:48.634 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:48.634 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:48.892 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:48.892 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:48.892 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:49.458 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:49.458 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.458 [2024-11-19 17:06:41.970266] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.717 17:06:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.717 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:49.976 2024/11/19 17:06:42 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a87bb87f-e521-4242-a6a3-92f88b0d311f], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:49.976 request: 00:07:49.976 { 00:07:49.976 "method": "bdev_lvol_get_lvstores", 00:07:49.976 "params": { 00:07:49.976 "uuid": "a87bb87f-e521-4242-a6a3-92f88b0d311f" 00:07:49.976 } 00:07:49.976 } 00:07:49.976 Got JSON-RPC error response 00:07:49.976 GoRPCClient: error on JSON-RPC call 00:07:49.976 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:49.976 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.976 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.976 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.976 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.234 aio_bdev 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.234 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.493 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 -t 2000 00:07:50.751 [ 
00:07:50.751 { 00:07:50.751 "aliases": [ 00:07:50.751 "lvs/lvol" 00:07:50.751 ], 00:07:50.751 "assigned_rate_limits": { 00:07:50.751 "r_mbytes_per_sec": 0, 00:07:50.751 "rw_ios_per_sec": 0, 00:07:50.751 "rw_mbytes_per_sec": 0, 00:07:50.751 "w_mbytes_per_sec": 0 00:07:50.751 }, 00:07:50.751 "block_size": 4096, 00:07:50.751 "claimed": false, 00:07:50.751 "driver_specific": { 00:07:50.751 "lvol": { 00:07:50.751 "base_bdev": "aio_bdev", 00:07:50.751 "clone": false, 00:07:50.751 "esnap_clone": false, 00:07:50.751 "lvol_store_uuid": "a87bb87f-e521-4242-a6a3-92f88b0d311f", 00:07:50.751 "num_allocated_clusters": 38, 00:07:50.751 "snapshot": false, 00:07:50.751 "thin_provision": false 00:07:50.751 } 00:07:50.751 }, 00:07:50.751 "name": "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4", 00:07:50.751 "num_blocks": 38912, 00:07:50.751 "product_name": "Logical Volume", 00:07:50.751 "supported_io_types": { 00:07:50.751 "abort": false, 00:07:50.751 "compare": false, 00:07:50.751 "compare_and_write": false, 00:07:50.751 "copy": false, 00:07:50.751 "flush": false, 00:07:50.751 "get_zone_info": false, 00:07:50.751 "nvme_admin": false, 00:07:50.751 "nvme_io": false, 00:07:50.751 "nvme_io_md": false, 00:07:50.751 "nvme_iov_md": false, 00:07:50.751 "read": true, 00:07:50.751 "reset": true, 00:07:50.751 "seek_data": true, 00:07:50.751 "seek_hole": true, 00:07:50.751 "unmap": true, 00:07:50.751 "write": true, 00:07:50.751 "write_zeroes": true, 00:07:50.751 "zcopy": false, 00:07:50.751 "zone_append": false, 00:07:50.751 "zone_management": false 00:07:50.751 }, 00:07:50.751 "uuid": "2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4", 00:07:50.751 "zoned": false 00:07:50.751 } 00:07:50.751 ] 00:07:50.751 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:50.751 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:50.751 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:51.009 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:51.009 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:51.009 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.575 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.575 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2e33b7d0-1bed-4f15-b9a7-f308aa4d65c4 00:07:51.575 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a87bb87f-e521-4242-a6a3-92f88b0d311f 00:07:52.145 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.145 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.712 00:07:52.712 real 0m21.584s 00:07:52.712 user 0m43.160s 00:07:52.712 sys 0m8.660s 00:07:52.712 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.712 ************************************ 00:07:52.712 END TEST lvs_grow_dirty 00:07:52.712 ************************************ 00:07:52.712 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:52.712 nvmf_trace.0 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.970 rmmod nvme_tcp 00:07:52.970 rmmod nvme_fabrics 00:07:52.970 rmmod nvme_keyring 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67105 ']' 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67105 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67105 ']' 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67105 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:52.970 17:06:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67105 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67105' 00:07:52.970 killing process with pid 67105 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67105 00:07:52.970 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67105 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.537 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:53.537 00:07:53.537 real 0m42.745s 00:07:53.537 user 1m8.274s 00:07:53.537 sys 0m11.916s 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.537 ************************************ 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.537 END TEST nvmf_lvs_grow 00:07:53.537 ************************************ 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.537 17:06:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.797 ************************************ 00:07:53.797 START TEST nvmf_bdev_io_wait 00:07:53.797 ************************************ 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:53.797 * Looking for test storage... 
00:07:53.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.797 --rc genhtml_branch_coverage=1 00:07:53.797 --rc genhtml_function_coverage=1 00:07:53.797 --rc genhtml_legend=1 00:07:53.797 --rc geninfo_all_blocks=1 00:07:53.797 --rc geninfo_unexecuted_blocks=1 00:07:53.797 00:07:53.797 ' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.797 --rc genhtml_branch_coverage=1 00:07:53.797 --rc genhtml_function_coverage=1 00:07:53.797 --rc genhtml_legend=1 00:07:53.797 --rc geninfo_all_blocks=1 00:07:53.797 --rc geninfo_unexecuted_blocks=1 00:07:53.797 00:07:53.797 ' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.797 --rc genhtml_branch_coverage=1 00:07:53.797 --rc genhtml_function_coverage=1 00:07:53.797 --rc genhtml_legend=1 00:07:53.797 --rc geninfo_all_blocks=1 00:07:53.797 --rc geninfo_unexecuted_blocks=1 00:07:53.797 00:07:53.797 ' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.797 --rc genhtml_branch_coverage=1 00:07:53.797 --rc genhtml_function_coverage=1 00:07:53.797 --rc genhtml_legend=1 00:07:53.797 --rc geninfo_all_blocks=1 00:07:53.797 --rc geninfo_unexecuted_blocks=1 00:07:53.797 00:07:53.797 ' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.797 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.798 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
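The nvmftestinit trace that follows builds a veth/bridge topology so the kernel-side initiator interfaces (10.0.0.1, 10.0.0.2) can reach the SPDK target interfaces (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk network namespace over TCP port 4420. A minimal standalone sketch of the same setup, reduced to a single initiator/target pair (all names, addresses, and commands are taken from the trace below; run as root):

# Sketch only: condensed from the nvmf_veth_init commands traced below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge joins both veth peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                          # reachability check, as in the trace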
00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.798 
17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:53.798 Cannot find device "nvmf_init_br" 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:53.798 Cannot find device "nvmf_init_br2" 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:53.798 Cannot find device "nvmf_tgt_br" 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:53.798 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.057 Cannot find device "nvmf_tgt_br2" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:54.057 Cannot find device "nvmf_init_br" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:54.057 Cannot find device "nvmf_init_br2" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:54.057 Cannot find device "nvmf_tgt_br" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:54.057 Cannot find device "nvmf_tgt_br2" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:54.057 Cannot find device "nvmf_br" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:54.057 Cannot find device "nvmf_init_if" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:54.057 Cannot find device "nvmf_init_if2" 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:54.057 
17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:54.057 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.058 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.316 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:54.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:07:54.317 00:07:54.317 --- 10.0.0.3 ping statistics --- 00:07:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.317 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:54.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:54.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:07:54.317 00:07:54.317 --- 10.0.0.4 ping statistics --- 00:07:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.317 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:54.317 00:07:54.317 --- 10.0.0.1 ping statistics --- 00:07:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.317 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:54.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:07:54.317 00:07:54.317 --- 10.0.0.2 ping statistics --- 00:07:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.317 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67579 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67579 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67579 ']' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.317 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.317 [2024-11-19 17:06:46.778235] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:54.317 [2024-11-19 17:06:46.778333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.575 [2024-11-19 17:06:46.931778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.575 [2024-11-19 17:06:47.022347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.575 [2024-11-19 17:06:47.022475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.575 [2024-11-19 17:06:47.022489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.575 [2024-11-19 17:06:47.022499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.575 [2024-11-19 17:06:47.022509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.575 [2024-11-19 17:06:47.024002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.575 [2024-11-19 17:06:47.024104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.575 [2024-11-19 17:06:47.024145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.575 [2024-11-19 17:06:47.024150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.511 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.511 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.511 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.511 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.771 [2024-11-19 17:06:48.048167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.771 Malloc0 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.771 [2024-11-19 17:06:48.113949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67637 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.771 { 00:07:55.771 "params": { 00:07:55.771 "name": "Nvme$subsystem", 00:07:55.771 "trtype": "$TEST_TRANSPORT", 00:07:55.771 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:07:55.771 "adrfam": "ipv4", 00:07:55.771 "trsvcid": "$NVMF_PORT", 00:07:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.771 "hdgst": ${hdgst:-false}, 00:07:55.771 "ddgst": ${ddgst:-false} 00:07:55.771 }, 00:07:55.771 "method": "bdev_nvme_attach_controller" 00:07:55.771 } 00:07:55.771 EOF 00:07:55.771 )") 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67639 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67642 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.771 { 00:07:55.771 "params": { 00:07:55.771 "name": "Nvme$subsystem", 00:07:55.771 "trtype": "$TEST_TRANSPORT", 00:07:55.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.771 "adrfam": "ipv4", 00:07:55.771 "trsvcid": "$NVMF_PORT", 00:07:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.771 "hdgst": ${hdgst:-false}, 00:07:55.771 "ddgst": ${ddgst:-false} 00:07:55.771 }, 00:07:55.771 "method": "bdev_nvme_attach_controller" 00:07:55.771 } 00:07:55.771 EOF 00:07:55.771 )") 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.771 { 00:07:55.771 "params": { 00:07:55.771 "name": "Nvme$subsystem", 00:07:55.771 "trtype": "$TEST_TRANSPORT", 00:07:55.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.771 "adrfam": "ipv4", 00:07:55.771 "trsvcid": "$NVMF_PORT", 00:07:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.771 "hdgst": ${hdgst:-false}, 00:07:55.771 "ddgst": ${ddgst:-false} 00:07:55.771 }, 00:07:55.771 "method": "bdev_nvme_attach_controller" 00:07:55.771 } 00:07:55.771 EOF 00:07:55.771 )") 
00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.771 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.771 { 00:07:55.771 "params": { 00:07:55.771 "name": "Nvme$subsystem", 00:07:55.771 "trtype": "$TEST_TRANSPORT", 00:07:55.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.771 "adrfam": "ipv4", 00:07:55.771 "trsvcid": "$NVMF_PORT", 00:07:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.771 "hdgst": ${hdgst:-false}, 00:07:55.771 "ddgst": ${ddgst:-false} 00:07:55.772 }, 00:07:55.772 "method": "bdev_nvme_attach_controller" 00:07:55.772 } 00:07:55.772 EOF 00:07:55.772 )") 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67644 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.772 "params": { 00:07:55.772 "name": "Nvme1", 00:07:55.772 "trtype": "tcp", 00:07:55.772 "traddr": "10.0.0.3", 00:07:55.772 "adrfam": "ipv4", 00:07:55.772 "trsvcid": "4420", 00:07:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.772 "hdgst": false, 00:07:55.772 "ddgst": false 00:07:55.772 }, 00:07:55.772 "method": "bdev_nvme_attach_controller" 00:07:55.772 }' 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.772 "params": { 00:07:55.772 "name": "Nvme1", 00:07:55.772 "trtype": "tcp", 00:07:55.772 "traddr": "10.0.0.3", 00:07:55.772 "adrfam": "ipv4", 00:07:55.772 "trsvcid": "4420", 00:07:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.772 "hdgst": false, 00:07:55.772 "ddgst": false 00:07:55.772 }, 00:07:55.772 "method": "bdev_nvme_attach_controller" 00:07:55.772 }' 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
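Each of the four workloads (write, read, flush, unmap) is an independent bdevperf process on its own core mask and shm id, fed its NVMe-oF controller definition as a JSON config over an anonymous fd. A hand-run sketch of the write job; the command line and the params object are verbatim from the trace, while the outer subsystems wrapper is not echoed by the trace and is reconstructed here from SPDK's usual JSON config shape:

# Sketch: re-run the '-w write' bdevperf job against the target by hand.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(cat << 'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)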
00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.772 "params": { 00:07:55.772 "name": "Nvme1", 00:07:55.772 "trtype": "tcp", 00:07:55.772 "traddr": "10.0.0.3", 00:07:55.772 "adrfam": "ipv4", 00:07:55.772 "trsvcid": "4420", 00:07:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.772 "hdgst": false, 00:07:55.772 "ddgst": false 00:07:55.772 }, 00:07:55.772 "method": "bdev_nvme_attach_controller" 00:07:55.772 }' 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.772 [2024-11-19 17:06:48.174724] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:55.772 [2024-11-19 17:06:48.174833] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.772 "params": { 00:07:55.772 "name": "Nvme1", 00:07:55.772 "trtype": "tcp", 00:07:55.772 "traddr": "10.0.0.3", 00:07:55.772 "adrfam": "ipv4", 00:07:55.772 "trsvcid": "4420", 00:07:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.772 "hdgst": false, 00:07:55.772 "ddgst": false 00:07:55.772 }, 00:07:55.772 "method": "bdev_nvme_attach_controller" 00:07:55.772 }' 00:07:55.772 [2024-11-19 17:06:48.201992] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:55.772 [2024-11-19 17:06:48.203035] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:55.772 [2024-11-19 17:06:48.206075] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:55.772 [2024-11-19 17:06:48.206159] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:55.772 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67637 00:07:55.772 [2024-11-19 17:06:48.230865] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:55.772 [2024-11-19 17:06:48.231522] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:56.032 [2024-11-19 17:06:48.396819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.032 [2024-11-19 17:06:48.452855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:56.032 [2024-11-19 17:06:48.473038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.032 [2024-11-19 17:06:48.531156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:56.032 [2024-11-19 17:06:48.556104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.306 [2024-11-19 17:06:48.610883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:56.306 Running I/O for 1 seconds... 00:07:56.306 [2024-11-19 17:06:48.640529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.306 Running I/O for 1 seconds... 00:07:56.306 [2024-11-19 17:06:48.701045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:56.306 Running I/O for 1 seconds... 00:07:56.583 Running I/O for 1 seconds... 00:07:57.151 5650.00 IOPS, 22.07 MiB/s 00:07:57.151 Latency(us) 00:07:57.151 [2024-11-19T17:06:49.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.151 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:57.151 Nvme1n1 : 1.03 5633.52 22.01 0.00 0.00 22355.51 7804.74 38606.66 00:07:57.151 [2024-11-19T17:06:49.684Z] =================================================================================================================== 00:07:57.151 [2024-11-19T17:06:49.684Z] Total : 5633.52 22.01 0.00 0.00 22355.51 7804.74 38606.66 00:07:57.151 194296.00 IOPS, 758.97 MiB/s 00:07:57.151 Latency(us) 00:07:57.151 [2024-11-19T17:06:49.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:57.151 Nvme1n1 : 1.00 193935.33 757.56 0.00 0.00 656.46 290.44 1817.13 00:07:57.151 [2024-11-19T17:06:49.684Z] =================================================================================================================== 00:07:57.151 [2024-11-19T17:06:49.684Z] Total : 193935.33 757.56 0.00 0.00 656.46 290.44 1817.13 00:07:57.410 7826.00 IOPS, 30.57 MiB/s 00:07:57.410 Latency(us) 00:07:57.410 [2024-11-19T17:06:49.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.410 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:57.410 Nvme1n1 : 1.01 7880.38 30.78 0.00 0.00 16155.43 2844.86 23473.80 00:07:57.410 [2024-11-19T17:06:49.943Z] =================================================================================================================== 00:07:57.410 [2024-11-19T17:06:49.943Z] Total : 7880.38 30.78 0.00 0.00 16155.43 2844.86 23473.80 00:07:57.410 5835.00 IOPS, 22.79 MiB/s 00:07:57.410 Latency(us) 00:07:57.410 [2024-11-19T17:06:49.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.411 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:57.411 Nvme1n1 : 1.01 5943.16 23.22 0.00 0.00 21481.81 5719.51 52190.49 00:07:57.411 [2024-11-19T17:06:49.944Z] 
=================================================================================================================== 00:07:57.411 [2024-11-19T17:06:49.944Z] Total : 5943.16 23.22 0.00 0.00 21481.81 5719.51 52190.49 00:07:57.411 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67639 00:07:57.411 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67642 00:07:57.411 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67644 00:07:57.670 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.670 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.670 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.670 rmmod nvme_tcp 00:07:57.670 rmmod nvme_fabrics 00:07:57.670 rmmod nvme_keyring 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67579 ']' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67579 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67579 ']' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67579 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67579 00:07:57.670 killing process with pid 67579 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67579' 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67579 00:07:57.670 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67579 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:57.929 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.189 17:06:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:58.189 ************************************ 00:07:58.189 END TEST nvmf_bdev_io_wait 00:07:58.189 ************************************ 00:07:58.189 00:07:58.189 real 0m4.550s 00:07:58.189 user 0m18.228s 00:07:58.189 sys 0m2.218s 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.189 ************************************ 00:07:58.189 START TEST nvmf_queue_depth 00:07:58.189 ************************************ 00:07:58.189 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.449 * Looking for test storage... 00:07:58.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
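[annotation] The END TEST banner and the real/user/sys triple above come from the run_test wrapper that launches each suite. A hedged reconstruction from the banners and the "'[' 3 -le 1 ']'" argument check in the trace, not SPDK's actual autotest_common.sh:

  run_test() {
      [ $# -le 1 ] && return 1        # mirrors the "'[' 3 -le 1 ']'" arg check
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                       # bash time produces the real/user/sys lines
      local rc=$?
      echo "END TEST $name"
      return $rc
  }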
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.449 --rc genhtml_branch_coverage=1 00:07:58.449 --rc genhtml_function_coverage=1 00:07:58.449 --rc genhtml_legend=1 00:07:58.449 --rc geninfo_all_blocks=1 00:07:58.449 --rc geninfo_unexecuted_blocks=1 00:07:58.449 00:07:58.449 ' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.449 --rc genhtml_branch_coverage=1 00:07:58.449 --rc genhtml_function_coverage=1 00:07:58.449 --rc genhtml_legend=1 00:07:58.449 --rc geninfo_all_blocks=1 00:07:58.449 --rc geninfo_unexecuted_blocks=1 00:07:58.449 00:07:58.449 ' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.449 --rc genhtml_branch_coverage=1 00:07:58.449 --rc genhtml_function_coverage=1 00:07:58.449 --rc genhtml_legend=1 00:07:58.449 --rc geninfo_all_blocks=1 00:07:58.449 --rc geninfo_unexecuted_blocks=1 00:07:58.449 00:07:58.449 ' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.449 --rc genhtml_branch_coverage=1 00:07:58.449 --rc genhtml_function_coverage=1 00:07:58.449 --rc genhtml_legend=1 00:07:58.449 --rc geninfo_all_blocks=1 00:07:58.449 --rc geninfo_unexecuted_blocks=1 00:07:58.449 00:07:58.449 ' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.449 17:06:50 
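[annotation] The long cmp_versions trace above is a field-wise numeric compare of dotted versions, used here to decide whether the installed lcov (1.15) predates 2.x. A condensed bash equivalent of what the trace walks through (helper name kept, body simplified):

  lt() {  # succeeds when version $1 sorts before version $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"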
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.449 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:58.450 
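[annotation] The "[: : integer expression expected" complaint above is benign: build_nvmf_app_args hands an unset variable to -eq, test(1) rejects the empty string, and the condition simply evaluates false. The failure mode in isolation:

  [ "" -eq 1 ]   # -> [: : integer expression expected; exit status 2, not fatal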
17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.450 17:06:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:58.450 Cannot find device "nvmf_init_br" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:58.450 Cannot find device "nvmf_init_br2" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:58.450 Cannot find device "nvmf_tgt_br" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.450 Cannot find device "nvmf_tgt_br2" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:58.450 Cannot find device "nvmf_init_br" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:58.450 Cannot find device "nvmf_init_br2" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:58.450 Cannot find device "nvmf_tgt_br" 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:58.450 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:58.709 Cannot find device "nvmf_tgt_br2" 00:07:58.709 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:58.709 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:58.709 Cannot find device "nvmf_br" 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:58.709 Cannot find device "nvmf_init_if" 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:58.709 Cannot find device "nvmf_init_if2" 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.709 17:06:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.709 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.710 
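[annotation] The burst of ip commands above (the initial "Cannot find device" probes are just best-effort cleanup of leftovers) builds a bridged lab: initiator veth ends stay in the root namespace with 10.0.0.1/.2, target ends move into nvmf_tgt_ns_spdk with 10.0.0.3/.4, and the peer ends all join the nvmf_br bridge. Condensed to one pair per side, straight from the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br master nvmf_br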
17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.710 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:58.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:58.969 00:07:58.969 --- 10.0.0.3 ping statistics --- 00:07:58.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.969 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:58.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:58.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:07:58.969 00:07:58.969 --- 10.0.0.4 ping statistics --- 00:07:58.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.969 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:58.969 00:07:58.969 --- 10.0.0.1 ping statistics --- 00:07:58.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.969 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:58.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:58.969 00:07:58.969 --- 10.0.0.2 ping statistics --- 00:07:58.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.969 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67938 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67938 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67938 ']' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.969 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.969 [2024-11-19 17:06:51.354242] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
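[annotation] With the four pings above confirming both directions across the bridge, the target binary is launched inside the namespace, so its TCP listener lives on 10.0.0.3 while the RPC UNIX socket stays reachable from the host side of the test:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten then polls until /var/tmp/spdk.sock accepts RPCs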
00:07:58.969 [2024-11-19 17:06:51.354358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.228 [2024-11-19 17:06:51.514019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.228 [2024-11-19 17:06:51.571849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.228 [2024-11-19 17:06:51.572194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.228 [2024-11-19 17:06:51.572230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.228 [2024-11-19 17:06:51.572241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.228 [2024-11-19 17:06:51.572250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.228 [2024-11-19 17:06:51.572791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.228 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.228 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:59.228 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.228 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.229 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.229 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.229 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.229 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.229 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.487 [2024-11-19 17:06:51.761080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.487 Malloc0 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.487 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.488 [2024-11-19 17:06:51.813915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:59.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67969 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67969 /var/tmp/bdevperf.sock 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67969 ']' 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.488 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.488 [2024-11-19 17:06:51.881416] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
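[annotation] Everything the queue-depth target needs is then assembled over RPC; distilled from the rpc_cmd calls above (rpc.py talks to the default /var/tmp/spdk.sock the target opened):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420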
00:07:59.488 [2024-11-19 17:06:51.881703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67969 ] 00:07:59.745 [2024-11-19 17:06:52.035963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.745 [2024-11-19 17:06:52.126145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.677 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.677 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.677 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:00.677 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.678 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.678 NVMe0n1 00:08:00.678 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.678 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.678 Running I/O for 10 seconds... 00:08:02.989 8465.00 IOPS, 33.07 MiB/s [2024-11-19T17:06:56.105Z] 8553.00 IOPS, 33.41 MiB/s [2024-11-19T17:06:57.483Z] 8537.00 IOPS, 33.35 MiB/s [2024-11-19T17:06:58.420Z] 8693.75 IOPS, 33.96 MiB/s [2024-11-19T17:06:59.358Z] 8703.80 IOPS, 34.00 MiB/s [2024-11-19T17:07:00.295Z] 8746.00 IOPS, 34.16 MiB/s [2024-11-19T17:07:01.230Z] 8768.29 IOPS, 34.25 MiB/s [2024-11-19T17:07:02.165Z] 8761.12 IOPS, 34.22 MiB/s [2024-11-19T17:07:03.127Z] 8846.56 IOPS, 34.56 MiB/s [2024-11-19T17:07:03.386Z] 8895.10 IOPS, 34.75 MiB/s 00:08:10.853 Latency(us) 00:08:10.853 [2024-11-19T17:07:03.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.853 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:10.853 Verification LBA range: start 0x0 length 0x4000 00:08:10.853 NVMe0n1 : 10.08 8923.57 34.86 0.00 0.00 114256.65 25856.93 118203.11 00:08:10.853 [2024-11-19T17:07:03.386Z] =================================================================================================================== 00:08:10.853 [2024-11-19T17:07:03.386Z] Total : 8923.57 34.86 0.00 0.00 114256.65 25856.93 118203.11 00:08:10.853 { 00:08:10.853 "results": [ 00:08:10.853 { 00:08:10.853 "job": "NVMe0n1", 00:08:10.853 "core_mask": "0x1", 00:08:10.853 "workload": "verify", 00:08:10.853 "status": "finished", 00:08:10.853 "verify_range": { 00:08:10.853 "start": 0, 00:08:10.853 "length": 16384 00:08:10.853 }, 00:08:10.853 "queue_depth": 1024, 00:08:10.853 "io_size": 4096, 00:08:10.853 "runtime": 10.082851, 00:08:10.853 "iops": 8923.56735213086, 00:08:10.853 "mibps": 34.85768496926117, 00:08:10.853 "io_failed": 0, 00:08:10.853 "io_timeout": 0, 00:08:10.853 "avg_latency_us": 114256.65462580009, 00:08:10.853 "min_latency_us": 25856.93090909091, 00:08:10.853 "max_latency_us": 118203.11272727273 00:08:10.853 } 00:08:10.853 ], 00:08:10.853 "core_count": 1 00:08:10.853 } 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
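[annotation] The measurement half mirrors the trace: bdevperf starts idle (-z) on its own RPC socket with a 1024-deep queue of 4 KiB verify I/O, a controller is attached over TCP, and the helper script triggers the 10-second run that produced the latency table above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests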
-- target/queue_depth.sh@39 -- # killprocess 67969 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67969 ']' 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67969 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67969 00:08:10.853 killing process with pid 67969 00:08:10.853 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.853 00:08:10.853 Latency(us) 00:08:10.853 [2024-11-19T17:07:03.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.853 [2024-11-19T17:07:03.386Z] =================================================================================================================== 00:08:10.853 [2024-11-19T17:07:03.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67969' 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67969 00:08:10.853 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67969 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.112 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.112 rmmod nvme_tcp 00:08:11.112 rmmod nvme_fabrics 00:08:11.113 rmmod nvme_keyring 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67938 ']' 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67938 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67938 ']' 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67938 00:08:11.113 17:07:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67938 00:08:11.113 killing process with pid 67938 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67938' 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67938 00:08:11.113 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67938 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:11.372 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:11.631 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.631 17:07:04 
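[annotation] killprocess, traced twice in this section (pids 67969 and 67938), amounts to a guarded kill-and-reap; a sketch reconstructed from the xtrace, not the exact autotest_common.sh source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1            # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }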
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:11.631 00:08:11.631 real 0m13.431s 00:08:11.631 user 0m22.521s 00:08:11.631 sys 0m2.580s 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.631 ************************************ 00:08:11.631 END TEST nvmf_queue_depth 00:08:11.631 ************************************ 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.631 ************************************ 00:08:11.631 START TEST nvmf_target_multipath 00:08:11.631 ************************************ 00:08:11.631 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:11.891 * Looking for test storage... 
00:08:11.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.891 --rc genhtml_branch_coverage=1 00:08:11.891 --rc genhtml_function_coverage=1 00:08:11.891 --rc genhtml_legend=1 00:08:11.891 --rc geninfo_all_blocks=1 00:08:11.891 --rc geninfo_unexecuted_blocks=1 00:08:11.891 00:08:11.891 ' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.891 --rc genhtml_branch_coverage=1 00:08:11.891 --rc genhtml_function_coverage=1 00:08:11.891 --rc genhtml_legend=1 00:08:11.891 --rc geninfo_all_blocks=1 00:08:11.891 --rc geninfo_unexecuted_blocks=1 00:08:11.891 00:08:11.891 ' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.891 --rc genhtml_branch_coverage=1 00:08:11.891 --rc genhtml_function_coverage=1 00:08:11.891 --rc genhtml_legend=1 00:08:11.891 --rc geninfo_all_blocks=1 00:08:11.891 --rc geninfo_unexecuted_blocks=1 00:08:11.891 00:08:11.891 ' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.891 --rc genhtml_branch_coverage=1 00:08:11.891 --rc genhtml_function_coverage=1 00:08:11.891 --rc genhtml_legend=1 00:08:11.891 --rc geninfo_all_blocks=1 00:08:11.891 --rc geninfo_unexecuted_blocks=1 00:08:11.891 00:08:11.891 ' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.891 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.891 
17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:11.892 17:07:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:11.892 Cannot find device "nvmf_init_br" 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:11.892 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:12.151 Cannot find device "nvmf_init_br2" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:12.151 Cannot find device "nvmf_tgt_br" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.151 Cannot find device "nvmf_tgt_br2" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:12.151 Cannot find device "nvmf_init_br" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:12.151 Cannot find device "nvmf_init_br2" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:12.151 Cannot find device "nvmf_tgt_br" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:12.151 Cannot find device "nvmf_tgt_br2" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:12.151 Cannot find device "nvmf_br" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:12.151 Cannot find device "nvmf_init_if" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:12.151 Cannot find device "nvmf_init_if2" 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
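
The net effect of the iproute2 commands above: a dedicated namespace (nvmf_tgt_ns_spdk) holds the target-side ends of four veth pairs, giving the host two initiator addresses (10.0.0.1, 10.0.0.2) and the namespaced target two listener addresses (10.0.0.3, 10.0.0.4). A minimal hand-reproduction of the first path, assuming only iproute2 and mirroring the interface names in this log (the second path, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is symmetric; the nvmf_br bridge enslaving that ties the peer ends together follows in the next entries):

    # namespace plus one veth pair per side of the path
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # move the target endpoint into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator on the host, target inside the namespace, same /24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
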
00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.151 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.152 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:12.152 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:12.152 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.152 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:12.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:12.411 00:08:12.411 --- 10.0.0.3 ping statistics --- 00:08:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.411 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:12.411 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:12.411 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:12.411 00:08:12.411 --- 10.0.0.4 ping statistics --- 00:08:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.411 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:12.411 00:08:12.411 --- 10.0.0.1 ping statistics --- 00:08:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.411 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:12.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:12.411 00:08:12.411 --- 10.0.0.2 ping statistics --- 00:08:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.411 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68358 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68358 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68358 ']' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
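
The nvmfappstart above boils down to launching the target binary inside the namespace and blocking until its RPC socket answers; waitforlisten's real implementation differs, but a rough hand-rolled equivalent (the 60-second timeout and the use of rpc_get_methods as the readiness probe are assumptions, not taken from the harness) looks like:

    # start the target in the test namespace, same flags as the log
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app is up
    timeout 60 sh -c 'until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done'
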
00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.411 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.411 [2024-11-19 17:07:04.833953] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:12.411 [2024-11-19 17:07:04.834075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.670 [2024-11-19 17:07:04.990868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.670 [2024-11-19 17:07:05.065003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.670 [2024-11-19 17:07:05.065069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.670 [2024-11-19 17:07:05.065088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.670 [2024-11-19 17:07:05.065107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.670 [2024-11-19 17:07:05.065117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.670 [2024-11-19 17:07:05.066664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.670 [2024-11-19 17:07:05.066873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.670 [2024-11-19 17:07:05.067781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.670 [2024-11-19 17:07:05.067804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.606 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.606 [2024-11-19 17:07:06.113791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.864 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:14.122 Malloc0 00:08:14.122 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
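
Condensed, the provisioning spread across the surrounding entries is this RPC sequence (every command appears verbatim in the log; -r on nvmf_create_subsystem enables ANA reporting, which is what the per-path ana_state checks later in this test rely on):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # one listener per network path; the initiator then connects to both,
    # and the kernel assembles them into a single multipath nvme subsystem
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
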
00:08:14.381 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.639 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:14.898 [2024-11-19 17:07:07.261820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:14.898 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:15.156 [2024-11-19 17:07:07.518211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:15.156 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:15.415 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:15.673 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.673 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:15.673 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.673 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:15.673 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68502 00:08:17.576 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:17.576 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:17.576 [global] 00:08:17.576 thread=1 00:08:17.576 invalidate=1 00:08:17.576 rw=randrw 00:08:17.576 time_based=1 00:08:17.576 runtime=6 00:08:17.576 ioengine=libaio 00:08:17.576 direct=1 00:08:17.576 bs=4096 00:08:17.576 iodepth=128 00:08:17.576 norandommap=0 00:08:17.576 numjobs=1 00:08:17.576 00:08:17.576 verify_dump=1 00:08:17.576 verify_backlog=512 00:08:17.576 verify_state_save=0 00:08:17.576 do_verify=1 00:08:17.576 verify=crc32c-intel 00:08:17.576 [job0] 00:08:17.576 filename=/dev/nvme0n1 00:08:17.576 Could not set queue depth (nvme0n1) 00:08:17.835 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:17.835 fio-3.35 00:08:17.835 Starting 1 thread 00:08:18.773 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:19.032 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:19.290 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:20.227 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:20.227 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:20.227 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:20.227 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:20.486 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:20.746 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:22.126 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:22.126 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:22.126 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:22.126 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68502 00:08:24.027 00:08:24.027 job0: (groupid=0, jobs=1): err= 0: pid=68523: Tue Nov 19 17:07:16 2024 00:08:24.027 read: IOPS=10.7k, BW=41.9MiB/s (44.0MB/s)(252MiB/6004msec) 00:08:24.027 slat (usec): min=3, max=5942, avg=53.94, stdev=245.71 00:08:24.027 clat (usec): min=2434, max=15145, avg=8166.56, stdev=1228.33 00:08:24.027 lat (usec): min=2444, max=15156, avg=8220.50, stdev=1240.00 00:08:24.027 clat percentiles (usec): 00:08:24.027 | 1.00th=[ 5014], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7308], 00:08:24.027 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:08:24.027 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10159], 00:08:24.027 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13698], 99.95th=[14091], 00:08:24.027 | 99.99th=[14353] 00:08:24.027 bw ( KiB/s): min=11408, max=28688, per=51.45%, avg=22099.09, stdev=6484.23, samples=11 00:08:24.027 iops : min= 2852, max= 7172, avg=5524.73, stdev=1621.03, samples=11 00:08:24.027 write: IOPS=6396, BW=25.0MiB/s (26.2MB/s)(131MiB/5244msec); 0 zone resets 00:08:24.027 slat (usec): min=10, max=2498, avg=65.36, stdev=169.84 00:08:24.027 clat (usec): min=614, max=14939, avg=7000.34, stdev=1012.72 00:08:24.027 lat (usec): min=658, max=14962, avg=7065.70, stdev=1018.02 00:08:24.027 clat percentiles (usec): 00:08:24.027 | 1.00th=[ 4015], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6390], 00:08:24.027 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7242], 00:08:24.027 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8356], 00:08:24.027 | 99.00th=[10159], 99.50th=[10945], 99.90th=[12780], 99.95th=[12911], 00:08:24.027 | 99.99th=[13566] 00:08:24.027 bw ( KiB/s): min=12048, max=27944, per=86.72%, avg=22189.82, stdev=6049.84, samples=11 00:08:24.027 iops : min= 3012, max= 6986, avg=5547.36, stdev=1512.40, samples=11 00:08:24.027 lat (usec) : 750=0.01% 00:08:24.027 lat (msec) : 2=0.01%, 4=0.43%, 10=95.20%, 20=4.36% 00:08:24.027 cpu : usr=5.66%, sys=21.09%, ctx=6194, majf=0, minf=114 00:08:24.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:24.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:24.027 issued rwts: total=64469,33544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:24.027 00:08:24.027 Run status group 0 (all jobs): 00:08:24.027 READ: bw=41.9MiB/s (44.0MB/s), 41.9MiB/s-41.9MiB/s (44.0MB/s-44.0MB/s), io=252MiB (264MB), run=6004-6004msec 00:08:24.027 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=131MiB (137MB), run=5244-5244msec 00:08:24.027 00:08:24.027 Disk stats (read/write): 00:08:24.027 nvme0n1: ios=63431/33030, merge=0/0, ticks=486229/217034, in_queue=703263, util=98.70% 00:08:24.027 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:24.286 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:08:24.545 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:25.482 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:25.482 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:25.482 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:25.482 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:25.482 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68660 00:08:25.482 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:25.482 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:25.741 [global] 00:08:25.741 thread=1 00:08:25.741 invalidate=1 00:08:25.741 rw=randrw 00:08:25.741 time_based=1 00:08:25.741 runtime=6 00:08:25.741 ioengine=libaio 00:08:25.741 direct=1 00:08:25.741 bs=4096 00:08:25.741 iodepth=128 00:08:25.741 norandommap=0 00:08:25.741 numjobs=1 00:08:25.741 00:08:25.741 verify_dump=1 00:08:25.741 verify_backlog=512 00:08:25.741 verify_state_save=0 00:08:25.741 do_verify=1 00:08:25.741 verify=crc32c-intel 00:08:25.741 [job0] 00:08:25.741 filename=/dev/nvme0n1 00:08:25.741 Could not set queue depth (nvme0n1) 00:08:25.741 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:25.741 fio-3.35 00:08:25.741 Starting 1 thread 00:08:26.677 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:26.935 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:27.193 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:27.194 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:28.620 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:28.620 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.620 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.620 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:28.620 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.880 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:29.838 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:29.838 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:29.838 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:29.838 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68660 00:08:32.367 00:08:32.367 job0: (groupid=0, jobs=1): err= 0: pid=68681: Tue Nov 19 17:07:24 2024 00:08:32.367 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(266MiB/6006msec) 00:08:32.367 slat (nsec): min=1989, max=5651.2k, avg=44423.37, stdev=220375.51 00:08:32.367 clat (usec): min=485, max=20918, avg=7801.10, stdev=1792.43 00:08:32.367 lat (usec): min=508, max=20927, avg=7845.53, stdev=1806.96 00:08:32.367 clat percentiles (usec): 00:08:32.368 | 1.00th=[ 3064], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6521], 00:08:32.368 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:08:32.368 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10683], 00:08:32.368 | 99.00th=[12649], 99.50th=[13566], 99.90th=[16057], 99.95th=[18220], 00:08:32.368 | 99.99th=[19530] 00:08:32.368 bw ( KiB/s): min= 4712, max=37080, per=52.88%, avg=23975.82, stdev=9907.12, samples=11 00:08:32.368 iops : min= 1178, max= 9270, avg=5993.91, stdev=2476.74, samples=11 00:08:32.368 write: IOPS=6863, BW=26.8MiB/s (28.1MB/s)(141MiB/5252msec); 0 zone resets 00:08:32.368 slat (usec): min=3, max=3505, avg=53.40, stdev=139.86 00:08:32.368 clat (usec): min=233, max=15903, avg=6421.34, stdev=1673.94 00:08:32.368 lat (usec): min=303, max=15936, avg=6474.74, stdev=1685.89 00:08:32.368 clat percentiles (usec): 00:08:32.368 | 1.00th=[ 2737], 5.00th=[ 3523], 10.00th=[ 3982], 20.00th=[ 4752], 00:08:32.368 | 30.00th=[ 5669], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7111], 00:08:32.368 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8160], 95.00th=[ 8586], 00:08:32.368 | 99.00th=[10683], 99.50th=[11600], 99.90th=[13042], 99.95th=[13566], 00:08:32.368 | 99.99th=[15270] 00:08:32.368 bw ( KiB/s): min= 5032, max=36240, per=87.47%, avg=24015.64, stdev=9662.90, samples=11 00:08:32.368 iops : min= 1258, max= 9060, avg=6003.91, stdev=2415.73, samples=11 00:08:32.368 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:08:32.368 lat (msec) : 2=0.22%, 4=4.58%, 10=89.37%, 20=5.79%, 50=0.01% 00:08:32.368 cpu : usr=5.61%, sys=23.11%, ctx=6800, majf=0, minf=114 00:08:32.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:32.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.368 issued rwts: total=68080,36047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.368 00:08:32.368 Run status group 0 (all jobs): 00:08:32.368 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6006-6006msec 00:08:32.368 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=141MiB (148MB), run=5252-5252msec 00:08:32.368 00:08:32.368 Disk stats (read/write): 00:08:32.368 nvme0n1: ios=67164/35421, merge=0/0, ticks=492132/211295, in_queue=703427, util=98.63% 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.368 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.368 rmmod nvme_tcp 00:08:32.627 rmmod nvme_fabrics 00:08:32.627 rmmod nvme_keyring 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68358 ']' 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68358 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68358 ']' 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68358 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68358 00:08:32.627 killing process with pid 68358 00:08:32.627 
17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68358' 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68358 00:08:32.627 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68358 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.886 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:33.145 17:07:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:33.145 00:08:33.145 real 0m21.378s 00:08:33.145 user 1m23.204s 00:08:33.145 sys 0m6.556s 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 ************************************ 00:08:33.145 END TEST nvmf_target_multipath 00:08:33.145 ************************************ 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 ************************************ 00:08:33.145 START TEST nvmf_zcopy 00:08:33.145 ************************************ 00:08:33.145 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.145 * Looking for test storage... 
00:08:33.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.405 --rc genhtml_branch_coverage=1 00:08:33.405 --rc genhtml_function_coverage=1 00:08:33.405 --rc genhtml_legend=1 00:08:33.405 --rc geninfo_all_blocks=1 00:08:33.405 --rc geninfo_unexecuted_blocks=1 00:08:33.405 00:08:33.405 ' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.405 --rc genhtml_branch_coverage=1 00:08:33.405 --rc genhtml_function_coverage=1 00:08:33.405 --rc genhtml_legend=1 00:08:33.405 --rc geninfo_all_blocks=1 00:08:33.405 --rc geninfo_unexecuted_blocks=1 00:08:33.405 00:08:33.405 ' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.405 --rc genhtml_branch_coverage=1 00:08:33.405 --rc genhtml_function_coverage=1 00:08:33.405 --rc genhtml_legend=1 00:08:33.405 --rc geninfo_all_blocks=1 00:08:33.405 --rc geninfo_unexecuted_blocks=1 00:08:33.405 00:08:33.405 ' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.405 --rc genhtml_branch_coverage=1 00:08:33.405 --rc genhtml_function_coverage=1 00:08:33.405 --rc genhtml_legend=1 00:08:33.405 --rc geninfo_all_blocks=1 00:08:33.405 --rc geninfo_unexecuted_blocks=1 00:08:33.405 00:08:33.405 ' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
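The lt/cmp_versions trace above (scripts/common.sh) is the version gate for the lcov options that follow: versions are split on '.', '-' and ':' and compared field by field. A simplified re-creation of that logic, as a sketch only (padding missing fields with 0 is an assumption here; the real script differs in detail):

lt() { cmp_versions "$1" '<' "$2"; }              # lt 1.15 2 -> true, as in the trace
cmp_versions() {
    local IFS=.-:                                 # split version fields on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}           # pad the shorter version with zeros
        if ((a > b)); then [[ $op == '>' ]]; return $?; fi
        if ((a < b)); then [[ $op == '<' ]]; return $?; fi
    done
    [[ $op == '=' ]]                              # all fields equal
}

In the run above the comparison returns 0 (1 < 2 in the first field), so the --rc lcov_branch_coverage / --rc lcov_function_coverage options get exported into LCOV_OPTS.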
00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.405 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.406 Cannot find device "nvmf_init_br" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:33.406 17:07:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.406 Cannot find device "nvmf_init_br2" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.406 Cannot find device "nvmf_tgt_br" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.406 Cannot find device "nvmf_tgt_br2" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.406 Cannot find device "nvmf_init_br" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.406 Cannot find device "nvmf_init_br2" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.406 Cannot find device "nvmf_tgt_br" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.406 Cannot find device "nvmf_tgt_br2" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.406 Cannot find device "nvmf_br" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.406 Cannot find device "nvmf_init_if" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.406 Cannot find device "nvmf_init_if2" 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:33.406 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.665 17:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.665 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.666 17:07:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:33.666 00:08:33.666 --- 10.0.0.3 ping statistics --- 00:08:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.666 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:33.666 00:08:33.666 --- 10.0.0.4 ping statistics --- 00:08:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.666 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:33.666 00:08:33.666 --- 10.0.0.1 ping statistics --- 00:08:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.666 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:08:33.666 00:08:33.666 --- 10.0.0.2 ping statistics --- 00:08:33.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.666 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.666 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69019 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69019 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69019 ']' 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.925 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 [2024-11-19 17:07:26.286225] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
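For orientation, the four successful pings above exercise the topology that nvmf_veth_init assembled in the preceding trace. Condensed to the first initiator/target interface pair, using only commands that appear in the trace (nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4 are set up identically):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, stays in the default netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair; only the *_if end will move
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # nvmf_tgt_br remains in the default netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # enslaving both *_br peers to one bridge
ip link set nvmf_tgt_br master nvmf_br                        # connects the default and target namespaces
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side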
00:08:33.925 [2024-11-19 17:07:26.286346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.925 [2024-11-19 17:07:26.438049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.184 [2024-11-19 17:07:26.496908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.184 [2024-11-19 17:07:26.496976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.184 [2024-11-19 17:07:26.497001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.184 [2024-11-19 17:07:26.497012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.184 [2024-11-19 17:07:26.497021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.184 [2024-11-19 17:07:26.497466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.184 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.184 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.185 [2024-11-19 17:07:26.690710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.185 [2024-11-19 17:07:26.706908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.185 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 malloc0 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:34.444 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:34.445 { 00:08:34.445 "params": { 00:08:34.445 "name": "Nvme$subsystem", 00:08:34.445 "trtype": "$TEST_TRANSPORT", 00:08:34.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.445 "adrfam": "ipv4", 00:08:34.445 "trsvcid": "$NVMF_PORT", 00:08:34.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.445 "hdgst": ${hdgst:-false}, 00:08:34.445 "ddgst": ${ddgst:-false} 00:08:34.445 }, 00:08:34.445 "method": "bdev_nvme_attach_controller" 00:08:34.445 } 00:08:34.445 EOF 00:08:34.445 )") 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:34.445 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:34.445 "params": { 00:08:34.445 "name": "Nvme1", 00:08:34.445 "trtype": "tcp", 00:08:34.445 "traddr": "10.0.0.3", 00:08:34.445 "adrfam": "ipv4", 00:08:34.445 "trsvcid": "4420", 00:08:34.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:34.445 "hdgst": false, 00:08:34.445 "ddgst": false 00:08:34.445 }, 00:08:34.445 "method": "bdev_nvme_attach_controller" 00:08:34.445 }' 00:08:34.445 [2024-11-19 17:07:26.808798] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:34.445 [2024-11-19 17:07:26.808891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69056 ] 00:08:34.445 [2024-11-19 17:07:26.961693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.703 [2024-11-19 17:07:27.043262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.962 Running I/O for 10 seconds... 00:08:36.844 5816.00 IOPS, 45.44 MiB/s [2024-11-19T17:07:30.320Z] 5848.50 IOPS, 45.69 MiB/s [2024-11-19T17:07:31.733Z] 5911.67 IOPS, 46.18 MiB/s [2024-11-19T17:07:32.301Z] 5995.75 IOPS, 46.84 MiB/s [2024-11-19T17:07:33.678Z] 6050.80 IOPS, 47.27 MiB/s [2024-11-19T17:07:34.614Z] 6025.00 IOPS, 47.07 MiB/s [2024-11-19T17:07:35.549Z] 6051.00 IOPS, 47.27 MiB/s [2024-11-19T17:07:36.484Z] 6061.88 IOPS, 47.36 MiB/s [2024-11-19T17:07:37.421Z] 6060.22 IOPS, 47.35 MiB/s [2024-11-19T17:07:37.421Z] 6050.80 IOPS, 47.27 MiB/s 00:08:44.888 Latency(us) 00:08:44.888 [2024-11-19T17:07:37.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:44.888 Verification LBA range: start 0x0 length 0x1000 00:08:44.888 Nvme1n1 : 10.01 6053.02 47.29 0.00 0.00 21082.07 2159.71 31457.28 00:08:44.888 [2024-11-19T17:07:37.421Z] =================================================================================================================== 00:08:44.888 [2024-11-19T17:07:37.421Z] Total : 6053.02 47.29 0.00 0.00 21082.07 2159.71 31457.28 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69179 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.147 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.147 { 00:08:45.147 "params": { 00:08:45.147 "name": "Nvme$subsystem", 
00:08:45.147 "trtype": "$TEST_TRANSPORT", 00:08:45.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.148 "adrfam": "ipv4", 00:08:45.148 "trsvcid": "$NVMF_PORT", 00:08:45.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.148 "hdgst": ${hdgst:-false}, 00:08:45.148 "ddgst": ${ddgst:-false} 00:08:45.148 }, 00:08:45.148 "method": "bdev_nvme_attach_controller" 00:08:45.148 } 00:08:45.148 EOF 00:08:45.148 )") 00:08:45.148 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:45.148 [2024-11-19 17:07:37.548701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.548770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:45.148 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.148 "params": { 00:08:45.148 "name": "Nvme1", 00:08:45.148 "trtype": "tcp", 00:08:45.148 "traddr": "10.0.0.3", 00:08:45.148 "adrfam": "ipv4", 00:08:45.148 "trsvcid": "4420", 00:08:45.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.148 "hdgst": false, 00:08:45.148 "ddgst": false 00:08:45.148 }, 00:08:45.148 "method": "bdev_nvme_attach_controller" 00:08:45.148 }' 00:08:45.148 [2024-11-19 17:07:37.560668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.560693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.572664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.572688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.584664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.584686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.596666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.596688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.608672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.608696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.620675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.620699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 [2024-11-19 17:07:37.623456] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:45.148 [2024-11-19 17:07:37.623589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69179 ] 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.632679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.632701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.644681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.644702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.656682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.656703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:08:45.148 [2024-11-19 17:07:37.668684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.148 [2024-11-19 17:07:37.668705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.148 2024/11/19 17:07:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:45.408 [... the same three-message group (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" -> nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace" -> JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats at roughly 8-12 ms intervals from 17:07:37.680 through 17:07:38.052; only the timestamps change ...]
00:08:45.408 [2024-11-19 17:07:37.773747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.408 [2024-11-19 17:07:37.840807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.669 Running I/O for 5 seconds...
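The request the loop above keeps re-issuing can be reconstructed from the params dump in each log entry. A minimal Python sketch of that call, assuming SPDK's default RPC socket at /var/tmp/spdk.sock (the socket path is not shown in this log):

import json
import socket

# Sketch of the JSON-RPC call the loop above keeps retrying; the method
# name and parameter shape are taken from the log entries, while the
# socket path is SPDK's default and is an assumption.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # While NSID 1 is already attached to cnode1, the target replies with
    # the generic JSON-RPC invalid-params error seen throughout this log:
    # {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}
    print(sock.recv(4096).decode())

While NSID 1 remains attached, spdk_nvmf_subsystem_add_ns_ext rejects the add and the RPC layer maps the failure to JSON-RPC code -32602, which is exactly the Code=-32602 Msg=Invalid parameters reply repeated above; the same request is presumably what scripts/rpc.py nvmf_subsystem_add_ns would build.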
00:08:45.669 [... the same error group continues at roughly 10-20 ms intervals from 17:07:38.066 through 17:07:39.019 while the 5-second I/O run proceeds; only the timestamps change ...]
00:08:46.710 11566.00 IOPS, 90.36 MiB/s [2024-11-19T17:07:39.243Z]
00:08:46.710 [... repetition continues from 17:07:39.030 through 17:07:39.263 ...]
00:08:46.970 [2024-11-19 17:07:39.280065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.280098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.298996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.299028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.314669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.314702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.331778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.331810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.342515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.342549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.357531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.357570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.368176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.368213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:46.970 [2024-11-19 17:07:39.382341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.382377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.393386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.393436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.409270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.409323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.420514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.420552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.970 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.970 [2024-11-19 17:07:39.436337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.970 [2024-11-19 17:07:39.436388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.971 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.971 [2024-11-19 17:07:39.453355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.971 [2024-11-19 17:07:39.453423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.971 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.971 [2024-11-19 17:07:39.470407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.971 [2024-11-19 17:07:39.470445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.971 2024/11/19 17:07:39 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:46.971 [2024-11-19 17:07:39.487582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.971 [2024-11-19 17:07:39.487632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.971 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.503672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.503725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.515059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.515095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.530380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.530416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.541561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.541612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.557138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.557175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.567734] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.567798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.584303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.584342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.595956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.596000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.607716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.607780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.619558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.619606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.635270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.635324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.652049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.652089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.668631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.668683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.685845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.685906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.702562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.702599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.718938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.718990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.729974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.730025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.230 [2024-11-19 17:07:39.745846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.230 [2024-11-19 17:07:39.745895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.230 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.231 [2024-11-19 17:07:39.756826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:47.231 [2024-11-19 17:07:39.756886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.768723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.768805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.781021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.781055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.796900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.796935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.813088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.813142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.831279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.831331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.847561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.847615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.858425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.858460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.873675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.873727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.885231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.885283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.896800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.896860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.908348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.908399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.924633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.924684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.941386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:47.490 [2024-11-19 17:07:39.941437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.957608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.957657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.974096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.974146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.490 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.490 [2024-11-19 17:07:39.990670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.490 [2024-11-19 17:07:39.990723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.491 2024/11/19 17:07:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.491 [2024-11-19 17:07:40.006574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.491 [2024-11-19 17:07:40.006611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.491 2024/11/19 17:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.491 [2024-11-19 17:07:40.016975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.491 [2024-11-19 17:07:40.017024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.750 2024/11/19 17:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:47.750 [2024-11-19 17:07:40.032625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.750 [2024-11-19 17:07:40.032679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.750 2024/11/19 17:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
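The two fio progress records embedded in this stretch of the log (the "IOPS, MiB/s" lines above and below) are enough to recover the I/O size of the workload running while the RPC loop fails. A quick check, as a short Python sketch (the only assumption is that these are fio's usual IOPS/bandwidth progress samples):

    # Bandwidth divided by IOPS gives the average size of each I/O.
    for iops, mibps in [(11566.00, 90.36), (11333.50, 88.54)]:
        io_kib = mibps * 1024 / iops  # MiB/s -> KiB/s, then per-I/O
        print(f"{iops:.2f} IOPS @ {mibps:.2f} MiB/s -> {io_kib:.1f} KiB per I/O")

Both samples work out to ~8.0 KiB per I/O, so the drop from 11566.00 to 11333.50 IOPS between the two samples is a genuine throughput dip, not a change in block size.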
00:08:47.750 11333.50 IOPS, 88.54 MiB/s [2024-11-19T17:07:40.283Z]
[... the same failure pattern continues, timestamps advancing from 17:07:40.066 through 17:07:41.007 (elapsed 00:08:47.750-00:08:48.532) ...]
00:08:48.532 [2024-11-19 17:07:41.017434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:08:48.532 [2024-11-19 17:07:41.017703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.532 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.532 [2024-11-19 17:07:41.033074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.532 [2024-11-19 17:07:41.033340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.532 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.532 [2024-11-19 17:07:41.044649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.532 [2024-11-19 17:07:41.044966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.532 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.532 11330.33 IOPS, 88.52 MiB/s [2024-11-19T17:07:41.065Z] [2024-11-19 17:07:41.060905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.532 [2024-11-19 17:07:41.061121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.791 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.791 [2024-11-19 17:07:41.077440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.791 [2024-11-19 17:07:41.077708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.791 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.791 [2024-11-19 17:07:41.095103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.791 [2024-11-19 17:07:41.095368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.791 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.791 [2024-11-19 17:07:41.111858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.791 [2024-11-19 17:07:41.111987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.791 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.791 [2024-11-19 17:07:41.128881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.791 [2024-11-19 17:07:41.129016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.146294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.146450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.161167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.161273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.170448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.170685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.186611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.186900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.198094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.198400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.209825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.210106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.221285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.221477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.236627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.236831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.254169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.254407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.269518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.269554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.279676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.279711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.294865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.295057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:48.792 [2024-11-19 17:07:41.304848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.792 [2024-11-19 17:07:41.304881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.792 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.321117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.321155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.337171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.337209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.353916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.354097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.364758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.364963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.380475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.380666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.397596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:49.051 [2024-11-19 17:07:41.397796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.413611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.413814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.424693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.424898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.439744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.439945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.450448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.450598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.465777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.465961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.481954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.482110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.499378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.499552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.515971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.516120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.532636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.532790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.549362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.549398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.560362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.560397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.051 [2024-11-19 17:07:41.571899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.051 [2024-11-19 17:07:41.572051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.051 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.588198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.310 [2024-11-19 17:07:41.588416] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.310 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.599176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.310 [2024-11-19 17:07:41.599364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.310 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.610828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.310 [2024-11-19 17:07:41.611001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.310 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.626804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.310 [2024-11-19 17:07:41.627019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.310 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.644392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.310 [2024-11-19 17:07:41.644582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.310 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.310 [2024-11-19 17:07:41.660547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.660748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.676611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.676812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.693436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.693646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.704042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.704250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.718997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.719202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.730147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.730196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.741027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.741061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.756022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.756059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.772331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.772365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.788815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.789024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.805474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.805662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.821841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.822028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.311 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.311 [2024-11-19 17:07:41.838552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.311 [2024-11-19 17:07:41.838702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.849795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.849981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.864515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.864708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:49.569 [2024-11-19 17:07:41.875318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.875504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.890219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.890266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.906537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.906759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.917356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.917391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.931697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.931900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.942219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.942426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.956922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.957102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.967692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.967897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:41.983150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:41.983337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.000219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.000436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.011300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.011353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.023305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.023371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.038572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.038609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.054694] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.054925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 11281.00 IOPS, 88.13 MiB/s [2024-11-19T17:07:42.102Z] 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.071163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.071363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.569 [2024-11-19 17:07:42.086518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.569 [2024-11-19 17:07:42.086670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.569 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.103210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.103402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.118182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.118408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.137017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.137054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.152318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.152358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.169586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.169627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.185200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.185410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.195947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.196111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.210848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.211021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.222135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.222366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.237321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.237528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.253531] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.253752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.264621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.264826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.827 [2024-11-19 17:07:42.276268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.827 [2024-11-19 17:07:42.276456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.827 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.828 [2024-11-19 17:07:42.287624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.828 [2024-11-19 17:07:42.287667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.828 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.828 [2024-11-19 17:07:42.301154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.828 [2024-11-19 17:07:42.301190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.828 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.828 [2024-11-19 17:07:42.311393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.828 [2024-11-19 17:07:42.311427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.828 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:49.828 [2024-11-19 17:07:42.326769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.828 [2024-11-19 17:07:42.327003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.828 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:49.828 [2024-11-19 17:07:42.336811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:49.828 [2024-11-19 17:07:42.336981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:49.828 2024/11/19 17:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:50.121 [... the same three-line error repeats, with only the timestamps changing, roughly fifty more times between 17:07:42.351 and 17:07:43.059 as the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is still attached ...]
00:08:50.652 11238.40 IOPS, 87.80 MiB/s [2024-11-19T17:07:43.185Z]
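This flood is the failure mode zcopy.sh is deliberately provoking: once NSID 1 is attached, every further nvmf_subsystem_add_ns for the same NSID is rejected with JSON-RPC error -32602. (The %!s(bool=false) in the dumped params is the Go RPC client formatting a boolean with a %s verb, not part of the request itself.) A minimal sketch of the same round-trip, assuming a running SPDK target, the stock scripts/rpc.py, and the names used in this run:

    # Hedged sketch, not part of the test run: provoke Code=-32602 by adding the same NSID twice.
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # rejected: "Requested NSID 1 already in use"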
00:08:50.652 Latency(us)
00:08:50.652 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:08:50.652 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:50.652 Nvme1n1                                :       5.01   11242.34      87.83      0.00     0.00   11371.04    4438.57   19541.64
00:08:50.652 ===================================================================================================================
00:08:50.652 Total                                  :              11242.34      87.83      0.00     0.00   11371.04    4438.57   19541.64
00:08:50.652 [2024-11-19 17:07:43.071243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:50.652 [2024-11-19 17:07:43.071418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:50.652 2024/11/19 17:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:50.913 [... the same error repeats, timestamps only, roughly thirty more times between 17:07:43.083 and 17:07:43.355 while the remaining queued calls drain ...]
00:08:50.913 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69179) - No such process
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69179
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.913 delay0
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.913 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:08:51.172 [2024-11-19 17:07:43.576355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:57.735 Initializing NVMe Controllers
00:08:57.735 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:57.735 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:57.735 Initialization complete. Launching workers.
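Before the abort run traced above, zcopy.sh swaps the plain malloc namespace for a delay bdev so the queue stays full of slow, in-flight I/O that the abort tool can try to cancel. A condensed sketch of that sequence, with values copied from the trace (the 1000000 figures are the delay bdev's read/write latencies in microseconds, so roughly one second each; the 10.0.0.3 listener is assumed to already exist):

    # Hedged sketch: rebuild NSID 1 on top of a 1-second delay bdev, then fire aborts at it.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'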
00:08:57.735 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109
00:08:57.735 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33
00:08:57.735 success 213, unsuccessful 183, failed 0
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:57.735 rmmod nvme_tcp
00:08:57.735 rmmod nvme_fabrics
00:08:57.735 rmmod nvme_keyring
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69019 ']'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69019 ']'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:57.735 killing process with pid 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69019'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69019
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:08:57.735 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:08:57.735
00:08:57.735 real	0m24.638s
00:08:57.735 user	0m38.835s
00:08:57.735 sys	0m7.309s
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.735 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:57.995 ************************************
00:08:57.995 END TEST nvmf_zcopy
00:08:57.995 ************************************
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:57.996 ************************************
00:08:57.996 START TEST nvmf_nmic
00:08:57.996 ************************************
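nvmftestfini, traced above, first unloads the kernel initiator modules and then has nvmf_veth_fini dismantle the veth/bridge topology the test created. The shape of that teardown, with interface names taken from the trace (a hedged sketch: the exact ordering and the final netns removal inside _remove_spdk_ns are assumptions):

    # Hedged sketch of the teardown performed above; error handling omitted.
    modprobe -v -r nvme-tcp        # the rmmod lines show this also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns ends by deleting the namespace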
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:57.996 * Looking for test storage...
00:08:57.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.996 --rc genhtml_branch_coverage=1
00:08:57.996 --rc genhtml_function_coverage=1
00:08:57.996 --rc genhtml_legend=1
00:08:57.996 --rc geninfo_all_blocks=1
00:08:57.996 --rc geninfo_unexecuted_blocks=1
00:08:57.996
00:08:57.996 '
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.996 --rc genhtml_branch_coverage=1
00:08:57.996 --rc genhtml_function_coverage=1
00:08:57.996 --rc genhtml_legend=1
00:08:57.996 --rc geninfo_all_blocks=1
00:08:57.996 --rc geninfo_unexecuted_blocks=1
00:08:57.996
00:08:57.996 '
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.996 --rc genhtml_branch_coverage=1
00:08:57.996 --rc genhtml_function_coverage=1
00:08:57.996 --rc genhtml_legend=1
00:08:57.996 --rc geninfo_all_blocks=1
00:08:57.996 --rc geninfo_unexecuted_blocks=1
00:08:57.996
00:08:57.996 '
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.996 --rc genhtml_branch_coverage=1
00:08:57.996 --rc genhtml_function_coverage=1
00:08:57.996 --rc genhtml_legend=1
00:08:57.996 --rc geninfo_all_blocks=1
00:08:57.996 --rc geninfo_unexecuted_blocks=1
00:08:57.996
00:08:57.996 '
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
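The lt 1.15 2 call traced above is autotest_common.sh deciding, from lcov --version, whether the old or the new coverage option set applies. The comparison that cmp_versions performs reduces to something like the following (a simplified sketch: the real helper also splits on '-' and ':' via IFS=.-: and supports the other comparison operators):

    # Hedged sketch: succeed when dot-separated version $1 sorts strictly before $2.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"          # succeeds: 1 < 2 on the first component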
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.996 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.997 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:57.997 17:07:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:57.997 Cannot 
find device "nvmf_init_br" 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:57.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.256 Cannot find device "nvmf_init_br2" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.256 Cannot find device "nvmf_tgt_br" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.256 Cannot find device "nvmf_tgt_br2" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.256 Cannot find device "nvmf_init_br" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.256 Cannot find device "nvmf_init_br2" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.256 Cannot find device "nvmf_tgt_br" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.256 Cannot find device "nvmf_tgt_br2" 00:08:58.256 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:58.257 Cannot find device "nvmf_br" 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:58.257 Cannot find device "nvmf_init_if" 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:58.257 Cannot find device "nvmf_init_if2" 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:58.257 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:58.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:08:58.517 00:08:58.517 --- 10.0.0.3 ping statistics --- 00:08:58.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.517 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:58.517 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:58.517 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:58.517 00:08:58.517 --- 10.0.0.4 ping statistics --- 00:08:58.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.517 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:58.517 00:08:58.517 --- 10.0.0.1 ping statistics --- 00:08:58.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.517 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:58.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:58.517 00:08:58.517 --- 10.0.0.2 ping statistics --- 00:08:58.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.517 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69554 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69554 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69554 ']' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.517 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.517 [2024-11-19 17:07:50.967319] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
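The records around this point show the nvmfappstart/waitforlisten pattern: nvmf_tgt is launched through ip netns exec so its TCP listeners bind inside the test namespace, and the harness blocks until the application answers on its JSON-RPC socket before issuing any rpc_cmd calls. A minimal sketch of that pattern, using the binary path, arguments, and socket visible in the trace (the real waitforlisten in autotest_common.sh is more elaborate, with a retry limit and a configurable RPC address):

    # Start the target in the namespace and wait for /var/tmp/spdk.sock to answer.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"   # fail fast if the target exited before listening
        sleep 0.1
    done

Once the socket is live, the test drives the target purely over JSON-RPC: nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener, all of which appear as rpc_cmd records below.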
00:08:58.517 [2024-11-19 17:07:50.967394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.776 [2024-11-19 17:07:51.111242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.776 [2024-11-19 17:07:51.180653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.776 [2024-11-19 17:07:51.180726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.776 [2024-11-19 17:07:51.180753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.776 [2024-11-19 17:07:51.180764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.776 [2024-11-19 17:07:51.180771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.776 [2024-11-19 17:07:51.182136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.776 [2024-11-19 17:07:51.182276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.776 [2024-11-19 17:07:51.182378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.776 [2024-11-19 17:07:51.182383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 [2024-11-19 17:07:51.390830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 Malloc0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 [2024-11-19 17:07:51.458660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 test case1: single bdev can't be used in multiple subsystems 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 [2024-11-19 17:07:51.482501] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:59.035 [2024-11-19 17:07:51.482541] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:59.035 [2024-11-19 17:07:51.482554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.035 2024/11/19 17:07:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:59.035 request: 00:08:59.035 { 00:08:59.035 "method": "nvmf_subsystem_add_ns", 00:08:59.035 "params": { 00:08:59.035 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.035 "namespace": { 00:08:59.035 "bdev_name": "Malloc0", 00:08:59.035 "no_auto_visible": false 00:08:59.035 } 00:08:59.035 } 00:08:59.035 } 00:08:59.035 Got JSON-RPC error response 00:08:59.035 GoRPCClient: error on JSON-RPC call 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:59.035 Adding namespace failed - expected result. 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:59.035 test case2: host connect to nvmf target in multiple paths 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.035 [2024-11-19 17:07:51.494611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.035 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:59.294 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:59.553 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:59.553 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:59.553 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.553 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:59.553 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:01.455 17:07:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:01.455 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:01.455 [global] 00:09:01.455 thread=1 00:09:01.455 invalidate=1 00:09:01.455 rw=write 00:09:01.455 time_based=1 00:09:01.455 runtime=1 00:09:01.455 ioengine=libaio 00:09:01.455 direct=1 00:09:01.455 bs=4096 00:09:01.455 iodepth=1 00:09:01.455 norandommap=0 00:09:01.455 numjobs=1 00:09:01.455 00:09:01.455 verify_dump=1 00:09:01.455 verify_backlog=512 00:09:01.455 verify_state_save=0 00:09:01.455 do_verify=1 00:09:01.455 verify=crc32c-intel 00:09:01.455 [job0] 00:09:01.455 filename=/dev/nvme0n1 00:09:01.455 Could not set queue depth (nvme0n1) 00:09:01.714 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.714 fio-3.35 00:09:01.714 Starting 1 thread 00:09:02.651 00:09:02.651 job0: (groupid=0, jobs=1): err= 0: pid=69650: Tue Nov 19 17:07:55 2024 00:09:02.651 read: IOPS=2637, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:02.651 slat (nsec): min=12361, max=77931, avg=17450.57, stdev=5836.69 00:09:02.651 clat (usec): min=124, max=2183, avg=174.89, stdev=46.07 00:09:02.651 lat (usec): min=136, max=2203, avg=192.34, stdev=46.76 00:09:02.651 clat percentiles (usec): 00:09:02.651 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:09:02.651 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:09:02.651 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:09:02.651 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 375], 99.95th=[ 668], 00:09:02.651 | 99.99th=[ 2180] 00:09:02.651 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:02.651 slat (nsec): min=18472, max=97771, avg=26379.49, stdev=8273.19 00:09:02.651 clat (usec): min=90, max=4918, avg=130.44, stdev=188.09 00:09:02.651 lat (usec): min=112, max=4958, avg=156.82, stdev=188.80 00:09:02.651 clat percentiles (usec): 00:09:02.651 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:09:02.651 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 119], 60.00th=[ 123], 00:09:02.651 | 70.00th=[ 127], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 155], 00:09:02.651 | 99.00th=[ 182], 99.50th=[ 208], 99.90th=[ 3687], 99.95th=[ 4883], 00:09:02.651 | 99.99th=[ 4948] 00:09:02.651 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:02.651 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:02.651 lat (usec) : 100=2.54%, 250=97.11%, 500=0.18%, 750=0.02%, 1000=0.02% 00:09:02.651 lat (msec) : 4=0.09%, 10=0.05% 00:09:02.651 cpu : usr=3.00%, sys=8.80%, ctx=5715, majf=0, minf=5 00:09:02.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:02.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.651 issued rwts: total=2640,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:02.651 00:09:02.651 Run status group 0 (all jobs): 00:09:02.651 READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:09:02.652 WRITE: 
bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:02.652 00:09:02.652 Disk stats (read/write): 00:09:02.652 nvme0n1: ios=2542/2560, merge=0/0, ticks=467/355, in_queue=822, util=90.18% 00:09:02.652 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.911 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.911 rmmod nvme_tcp 00:09:02.911 rmmod nvme_fabrics 00:09:02.911 rmmod nvme_keyring 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69554 ']' 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69554 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69554 ']' 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69554 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69554 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.170 killing process with pid 69554 00:09:03.170 
17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69554' 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69554 00:09:03.170 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69554 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:03.429 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:03.688 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:03.688 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.688 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.688 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:03.688 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.688 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.688 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.688 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:03.688 00:09:03.688 real 0m5.770s 00:09:03.688 user 0m18.034s 00:09:03.688 sys 0m1.460s 00:09:03.688 17:07:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.688 ************************************ 00:09:03.689 END TEST nvmf_nmic 00:09:03.689 ************************************ 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.689 ************************************ 00:09:03.689 START TEST nvmf_fio_target 00:09:03.689 ************************************ 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:03.689 * Looking for test storage... 00:09:03.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.689 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:03.948 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.949 --rc genhtml_branch_coverage=1 00:09:03.949 --rc genhtml_function_coverage=1 00:09:03.949 --rc genhtml_legend=1 00:09:03.949 --rc geninfo_all_blocks=1 00:09:03.949 --rc geninfo_unexecuted_blocks=1 00:09:03.949 00:09:03.949 ' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.949 --rc genhtml_branch_coverage=1 00:09:03.949 --rc genhtml_function_coverage=1 00:09:03.949 --rc genhtml_legend=1 00:09:03.949 --rc geninfo_all_blocks=1 00:09:03.949 --rc geninfo_unexecuted_blocks=1 00:09:03.949 00:09:03.949 ' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.949 --rc genhtml_branch_coverage=1 00:09:03.949 --rc genhtml_function_coverage=1 00:09:03.949 --rc genhtml_legend=1 00:09:03.949 --rc geninfo_all_blocks=1 00:09:03.949 --rc geninfo_unexecuted_blocks=1 00:09:03.949 00:09:03.949 ' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.949 --rc genhtml_branch_coverage=1 00:09:03.949 --rc genhtml_function_coverage=1 00:09:03.949 --rc genhtml_legend=1 00:09:03.949 --rc geninfo_all_blocks=1 00:09:03.949 --rc geninfo_unexecuted_blocks=1 00:09:03.949 00:09:03.949 ' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:03.949 
17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.949 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.949 17:07:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:03.949 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:03.950 Cannot find device "nvmf_init_br" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:03.950 Cannot find device "nvmf_init_br2" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:03.950 Cannot find device "nvmf_tgt_br" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.950 Cannot find device "nvmf_tgt_br2" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:03.950 Cannot find device "nvmf_init_br" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:03.950 Cannot find device "nvmf_init_br2" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:03.950 Cannot find device "nvmf_tgt_br" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:03.950 Cannot find device "nvmf_tgt_br2" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:03.950 Cannot find device "nvmf_br" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:03.950 Cannot find device "nvmf_init_if" 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:03.950 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:04.208 Cannot find device "nvmf_init_if2" 00:09:04.208 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:04.208 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.208 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:04.209 
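The "Cannot find device" and "Cannot open network namespace" lines in this stretch are expected on a fresh host: before building its topology, nvmf_veth_init tears down whatever a previous run may have left behind, and each failed cleanup command is immediately followed in the trace by a true entry at the same script line, consistent with a cmd || true idiom that deliberately swallows the failure. A sketch of the apparent pattern (not a verbatim quote from common.sh):

  ip link delete nvmf_br type bridge || true                          # no-op if the bridge never existed
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # no-op if the netns is absent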
17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.209 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:04.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:09:04.468 00:09:04.468 --- 10.0.0.3 ping statistics --- 00:09:04.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.468 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:04.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:04.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:04.468 00:09:04.468 --- 10.0.0.4 ping statistics --- 00:09:04.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.468 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:09:04.468 00:09:04.468 --- 10.0.0.1 ping statistics --- 00:09:04.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.468 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:04.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
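Taken together, the nvmf_veth_init steps above build a small bridged test network: four veth pairs, with nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) kept on the host as initiator ports, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace as target ports, all four peer interfaces enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and cross-namespace pings as a reachability check. Reduced to a single initiator/target pair, the recipe is (condensed from the trace; a sketch, not the harness script itself):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host (initiator) side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # host -> target namespace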
00:09:04.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:04.468 00:09:04.468 --- 10.0.0.2 ping statistics --- 00:09:04.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.468 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69888 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69888 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69888 ']' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.468 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.468 [2024-11-19 17:07:56.856645] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
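With the namespace plumbed, nvmfappstart launches the target inside it (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF; pid 69888 here) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. Done by hand, the same idea looks roughly like this (a sketch, not the harness's exact loop; rpc_get_methods is simply a cheap RPC to probe with):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0xF &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2    # retry until the RPC socket accepts connections
  done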
00:09:04.468 [2024-11-19 17:07:56.856781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.726 [2024-11-19 17:07:57.005134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.726 [2024-11-19 17:07:57.075683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.726 [2024-11-19 17:07:57.075772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.726 [2024-11-19 17:07:57.075800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.726 [2024-11-19 17:07:57.075809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.726 [2024-11-19 17:07:57.075818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.726 [2024-11-19 17:07:57.077321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.726 [2024-11-19 17:07:57.077457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.726 [2024-11-19 17:07:57.077569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.726 [2024-11-19 17:07:57.077571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.661 17:07:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.661 [2024-11-19 17:07:58.183407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.920 17:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.179 17:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:06.179 17:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.438 17:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:06.438 17:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.005 17:07:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:07.005 17:07:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.263 17:07:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:07.263 17:07:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:07.521 17:07:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.780 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:07.780 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.038 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:08.038 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.297 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:08.297 17:08:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:08.864 17:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.123 17:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:09.123 17:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.382 17:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:09.382 17:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:09.641 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:09.900 [2024-11-19 17:08:02.254572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:09.900 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:10.159 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:10.418 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
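The RPC sequence above assembles the device under test: a TCP transport, seven 64 MiB / 512-byte-block malloc bdevs, a raid0 array over Malloc2 and Malloc3, a concat array over Malloc4-Malloc6, and subsystem nqn.2016-06.io.spdk:cnode1 exporting Malloc0, Malloc1, raid0 and concat0 as four namespaces on 10.0.0.3:4420, which the initiator then attaches to; waitforserial then polls lsblk until all four SPDKISFASTANDAWESOME devices appear. Stripped of the xtrace noise (rpc.py standing in for the full scripts/rpc.py path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do rpc.py bdev_malloc_create 64 512; done       # Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                # yields /dev/nvme0n1..n4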
nvme_devices=0 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:10.677 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:12.582 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:12.582 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:12.582 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.582 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:12.582 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.582 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:12.582 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:12.582 [global] 00:09:12.582 thread=1 00:09:12.582 invalidate=1 00:09:12.582 rw=write 00:09:12.582 time_based=1 00:09:12.582 runtime=1 00:09:12.582 ioengine=libaio 00:09:12.582 direct=1 00:09:12.582 bs=4096 00:09:12.582 iodepth=1 00:09:12.582 norandommap=0 00:09:12.582 numjobs=1 00:09:12.582 00:09:12.582 verify_dump=1 00:09:12.582 verify_backlog=512 00:09:12.582 verify_state_save=0 00:09:12.582 do_verify=1 00:09:12.582 verify=crc32c-intel 00:09:12.582 [job0] 00:09:12.582 filename=/dev/nvme0n1 00:09:12.582 [job1] 00:09:12.582 filename=/dev/nvme0n2 00:09:12.582 [job2] 00:09:12.582 filename=/dev/nvme0n3 00:09:12.582 [job3] 00:09:12.582 filename=/dev/nvme0n4 00:09:12.840 Could not set queue depth (nvme0n1) 00:09:12.840 Could not set queue depth (nvme0n2) 00:09:12.840 Could not set queue depth (nvme0n3) 00:09:12.840 Could not set queue depth (nvme0n4) 00:09:12.840 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.840 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.840 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.840 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.840 fio-3.35 00:09:12.840 Starting 4 threads 00:09:14.215 00:09:14.215 job0: (groupid=0, jobs=1): err= 0: pid=70186: Tue Nov 19 17:08:06 2024 00:09:14.215 read: IOPS=1433, BW=5734KiB/s (5872kB/s)(5740KiB/1001msec) 00:09:14.215 slat (nsec): min=19021, max=57886, avg=25935.50, stdev=4940.76 00:09:14.215 clat (usec): min=161, max=1244, avg=345.28, stdev=54.97 00:09:14.215 lat (usec): min=192, max=1280, avg=371.21, stdev=55.15 00:09:14.215 clat percentiles (usec): 00:09:14.215 | 1.00th=[ 223], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 318], 00:09:14.215 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 351], 00:09:14.215 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 412], 00:09:14.215 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 1090], 99.95th=[ 1237], 00:09:14.215 | 99.99th=[ 1237] 00:09:14.215 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:14.215 slat 
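Decoding one fio summary line makes the rest of the output legible. Job0's "read: IOPS=1433, BW=5734KiB/s (5872kB/s)(5740KiB/1001msec)" above is internally consistent:

  bandwidth = 5740 KiB / 1.001 s    ~ 5734 KiB/s
  SI form   = 5734 x 1024 / 1000    ~ 5872 kB/s
  IOPS      = 5734 KiB/s / 4 KiB    ~ 1433

And since iodepth=1 and numjobs=1, the per-I/O round trip is roughly the reciprocal of the combined rate, 1 s / (1433 read + 1534 write IOPS) ~ 337 us, in line with the per-direction slat + clat averages reported below.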
(usec): min=27, max=117, avg=36.57, stdev= 7.76 00:09:14.215 clat (usec): min=150, max=421, avg=262.09, stdev=31.85 00:09:14.215 lat (usec): min=189, max=456, avg=298.66, stdev=31.15 00:09:14.215 clat percentiles (usec): 00:09:14.215 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 237], 00:09:14.215 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 265], 00:09:14.215 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 322], 00:09:14.215 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 420], 00:09:14.215 | 99.99th=[ 420] 00:09:14.215 bw ( KiB/s): min= 8192, max= 8192, per=27.57%, avg=8192.00, stdev= 0.00, samples=1 00:09:14.215 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:14.215 lat (usec) : 250=19.89%, 500=79.87%, 750=0.13%, 1000=0.03% 00:09:14.215 lat (msec) : 2=0.07% 00:09:14.215 cpu : usr=1.50%, sys=7.90%, ctx=2973, majf=0, minf=11 00:09:14.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.215 issued rwts: total=1435,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.215 job1: (groupid=0, jobs=1): err= 0: pid=70188: Tue Nov 19 17:08:06 2024 00:09:14.215 read: IOPS=1422, BW=5690KiB/s (5827kB/s)(5696KiB/1001msec) 00:09:14.215 slat (usec): min=15, max=102, avg=20.90, stdev= 5.55 00:09:14.215 clat (usec): min=175, max=2706, avg=353.64, stdev=80.69 00:09:14.215 lat (usec): min=197, max=2728, avg=374.54, stdev=80.62 00:09:14.215 clat percentiles (usec): 00:09:14.215 | 1.00th=[ 273], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 322], 00:09:14.215 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:09:14.215 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 416], 00:09:14.215 | 99.00th=[ 474], 99.50th=[ 553], 99.90th=[ 1270], 99.95th=[ 2704], 00:09:14.215 | 99.99th=[ 2704] 00:09:14.215 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:14.215 slat (usec): min=26, max=219, avg=36.59, stdev= 8.40 00:09:14.215 clat (usec): min=156, max=579, avg=262.65, stdev=35.86 00:09:14.215 lat (usec): min=203, max=614, avg=299.24, stdev=35.46 00:09:14.215 clat percentiles (usec): 00:09:14.216 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 237], 00:09:14.216 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:09:14.216 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 326], 00:09:14.216 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 457], 99.95th=[ 578], 00:09:14.216 | 99.99th=[ 578] 00:09:14.216 bw ( KiB/s): min= 8184, max= 8184, per=27.54%, avg=8184.00, stdev= 0.00, samples=1 00:09:14.216 iops : min= 2046, max= 2046, avg=2046.00, stdev= 0.00, samples=1 00:09:14.216 lat (usec) : 250=20.30%, 500=79.26%, 750=0.34% 00:09:14.216 lat (msec) : 2=0.07%, 4=0.03% 00:09:14.216 cpu : usr=1.20%, sys=7.00%, ctx=2971, majf=0, minf=7 00:09:14.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 issued rwts: total=1424,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.216 job2: (groupid=0, jobs=1): err= 0: pid=70192: Tue Nov 19 
17:08:06 2024 00:09:14.216 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:14.216 slat (nsec): min=19363, max=78371, avg=25842.47, stdev=5221.06 00:09:14.216 clat (usec): min=219, max=718, avg=292.90, stdev=28.82 00:09:14.216 lat (usec): min=242, max=743, avg=318.74, stdev=29.12 00:09:14.216 clat percentiles (usec): 00:09:14.216 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 269], 00:09:14.216 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:09:14.216 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 334], 00:09:14.216 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 586], 99.95th=[ 717], 00:09:14.216 | 99.99th=[ 717] 00:09:14.216 write: IOPS=1803, BW=7213KiB/s (7386kB/s)(7220KiB/1001msec); 0 zone resets 00:09:14.216 slat (nsec): min=24789, max=96803, avg=37991.94, stdev=8358.59 00:09:14.216 clat (usec): min=148, max=701, avg=239.82, stdev=28.53 00:09:14.216 lat (usec): min=187, max=743, avg=277.81, stdev=29.74 00:09:14.216 clat percentiles (usec): 00:09:14.216 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:09:14.216 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:09:14.216 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 285], 00:09:14.216 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 396], 99.95th=[ 701], 00:09:14.216 | 99.99th=[ 701] 00:09:14.216 bw ( KiB/s): min= 8192, max= 8192, per=27.57%, avg=8192.00, stdev= 0.00, samples=1 00:09:14.216 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:14.216 lat (usec) : 250=38.01%, 500=61.90%, 750=0.09% 00:09:14.216 cpu : usr=2.10%, sys=8.20%, ctx=3342, majf=0, minf=9 00:09:14.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 issued rwts: total=1536,1805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.216 job3: (groupid=0, jobs=1): err= 0: pid=70193: Tue Nov 19 17:08:06 2024 00:09:14.216 read: IOPS=2186, BW=8747KiB/s (8957kB/s)(8756KiB/1001msec) 00:09:14.216 slat (nsec): min=12719, max=52380, avg=15743.14, stdev=4448.13 00:09:14.216 clat (usec): min=156, max=2196, avg=212.90, stdev=52.25 00:09:14.216 lat (usec): min=170, max=2213, avg=228.64, stdev=52.70 00:09:14.216 clat percentiles (usec): 00:09:14.216 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:09:14.216 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:09:14.216 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 265], 00:09:14.216 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 519], 99.95th=[ 668], 00:09:14.216 | 99.99th=[ 2212] 00:09:14.216 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:14.216 slat (nsec): min=18774, max=61418, avg=22185.44, stdev=4380.81 00:09:14.216 clat (usec): min=115, max=1882, avg=169.81, stdev=45.19 00:09:14.216 lat (usec): min=137, max=1916, avg=191.99, stdev=45.80 00:09:14.216 clat percentiles (usec): 00:09:14.216 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:09:14.216 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 174], 00:09:14.216 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 212], 95.00th=[ 225], 00:09:14.216 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 420], 00:09:14.216 | 99.99th=[ 1876] 00:09:14.216 bw ( KiB/s): min=11208, max=11208, 
per=37.71%, avg=11208.00, stdev= 0.00, samples=1 00:09:14.216 iops : min= 2802, max= 2802, avg=2802.00, stdev= 0.00, samples=1 00:09:14.216 lat (usec) : 250=94.90%, 500=5.01%, 750=0.04% 00:09:14.216 lat (msec) : 2=0.02%, 4=0.02% 00:09:14.216 cpu : usr=1.20%, sys=7.50%, ctx=4750, majf=0, minf=11 00:09:14.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.216 issued rwts: total=2189,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.216 00:09:14.216 Run status group 0 (all jobs): 00:09:14.216 READ: bw=25.7MiB/s (26.9MB/s), 5690KiB/s-8747KiB/s (5827kB/s-8957kB/s), io=25.7MiB (27.0MB), run=1001-1001msec 00:09:14.216 WRITE: bw=29.0MiB/s (30.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=29.1MiB (30.5MB), run=1001-1001msec 00:09:14.216 00:09:14.216 Disk stats (read/write): 00:09:14.216 nvme0n1: ios=1102/1536, merge=0/0, ticks=399/433, in_queue=832, util=87.36% 00:09:14.216 nvme0n2: ios=1087/1536, merge=0/0, ticks=409/421, in_queue=830, util=88.70% 00:09:14.216 nvme0n3: ios=1320/1536, merge=0/0, ticks=433/406, in_queue=839, util=89.48% 00:09:14.216 nvme0n4: ios=2012/2048, merge=0/0, ticks=456/368, in_queue=824, util=89.73% 00:09:14.216 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:14.216 [global] 00:09:14.216 thread=1 00:09:14.216 invalidate=1 00:09:14.216 rw=randwrite 00:09:14.216 time_based=1 00:09:14.216 runtime=1 00:09:14.216 ioengine=libaio 00:09:14.216 direct=1 00:09:14.216 bs=4096 00:09:14.216 iodepth=1 00:09:14.216 norandommap=0 00:09:14.216 numjobs=1 00:09:14.216 00:09:14.216 verify_dump=1 00:09:14.216 verify_backlog=512 00:09:14.216 verify_state_save=0 00:09:14.216 do_verify=1 00:09:14.216 verify=crc32c-intel 00:09:14.216 [job0] 00:09:14.216 filename=/dev/nvme0n1 00:09:14.216 [job1] 00:09:14.216 filename=/dev/nvme0n2 00:09:14.216 [job2] 00:09:14.216 filename=/dev/nvme0n3 00:09:14.216 [job3] 00:09:14.216 filename=/dev/nvme0n4 00:09:14.216 Could not set queue depth (nvme0n1) 00:09:14.216 Could not set queue depth (nvme0n2) 00:09:14.216 Could not set queue depth (nvme0n3) 00:09:14.216 Could not set queue depth (nvme0n4) 00:09:14.216 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.216 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.216 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.216 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.216 fio-3.35 00:09:14.216 Starting 4 threads 00:09:15.591 00:09:15.591 job0: (groupid=0, jobs=1): err= 0: pid=70247: Tue Nov 19 17:08:07 2024 00:09:15.591 read: IOPS=1087, BW=4352KiB/s (4456kB/s)(4356KiB/1001msec) 00:09:15.591 slat (nsec): min=10286, max=99947, avg=29942.24, stdev=11186.18 00:09:15.591 clat (usec): min=160, max=1260, avg=389.25, stdev=81.30 00:09:15.591 lat (usec): min=179, max=1292, avg=419.20, stdev=80.17 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 198], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 334], 00:09:15.591 | 30.00th=[ 347], 40.00th=[ 359], 
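Both fio-wrapper invocations so far generate the same job file, differing only in the rw= line (write vs. randwrite); reassembled from the listings for readability:

  [global]
  thread=1
  invalidate=1
  rw=write                 # rw=randwrite in the second run
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4

The "Could not set queue depth" warnings appear once per device and are apparently harmless, presumably fio probing a block-layer sysfs knob these NVMe-oF devices do not expose; the jobs proceed regardless.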
50.00th=[ 375], 60.00th=[ 388], 00:09:15.591 | 70.00th=[ 412], 80.00th=[ 441], 90.00th=[ 478], 95.00th=[ 523], 00:09:15.591 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 750], 99.95th=[ 1254], 00:09:15.591 | 99.99th=[ 1254] 00:09:15.591 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:15.591 slat (usec): min=17, max=118, avg=35.92, stdev=10.38 00:09:15.591 clat (usec): min=167, max=796, avg=312.53, stdev=65.30 00:09:15.591 lat (usec): min=193, max=857, avg=348.45, stdev=61.86 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 200], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 262], 00:09:15.591 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:09:15.591 | 70.00th=[ 334], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 441], 00:09:15.591 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 799], 00:09:15.591 | 99.99th=[ 799] 00:09:15.591 bw ( KiB/s): min= 6738, max= 6738, per=25.23%, avg=6738.00, stdev= 0.00, samples=1 00:09:15.591 iops : min= 1684, max= 1684, avg=1684.00, stdev= 0.00, samples=1 00:09:15.591 lat (usec) : 250=7.66%, 500=89.10%, 750=3.09%, 1000=0.11% 00:09:15.591 lat (msec) : 2=0.04% 00:09:15.591 cpu : usr=2.20%, sys=6.70%, ctx=2626, majf=0, minf=21 00:09:15.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.591 issued rwts: total=1089,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.591 job1: (groupid=0, jobs=1): err= 0: pid=70248: Tue Nov 19 17:08:07 2024 00:09:15.591 read: IOPS=1771, BW=7085KiB/s (7255kB/s)(7092KiB/1001msec) 00:09:15.591 slat (nsec): min=13196, max=61546, avg=21848.89, stdev=6399.26 00:09:15.591 clat (usec): min=144, max=3229, avg=266.26, stdev=167.94 00:09:15.591 lat (usec): min=161, max=3271, avg=288.11, stdev=169.72 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 180], 00:09:15.591 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 221], 00:09:15.591 | 70.00th=[ 239], 80.00th=[ 273], 90.00th=[ 545], 95.00th=[ 685], 00:09:15.591 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 1106], 99.95th=[ 3228], 00:09:15.591 | 99.99th=[ 3228] 00:09:15.591 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:15.591 slat (usec): min=13, max=114, avg=30.57, stdev= 9.94 00:09:15.591 clat (usec): min=102, max=7823, avg=203.72, stdev=238.99 00:09:15.591 lat (usec): min=128, max=7856, avg=234.30, stdev=239.65 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 114], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 141], 00:09:15.591 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 172], 60.00th=[ 182], 00:09:15.591 | 70.00th=[ 198], 80.00th=[ 221], 90.00th=[ 318], 95.00th=[ 408], 00:09:15.591 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 3163], 99.95th=[ 5669], 00:09:15.591 | 99.99th=[ 7832] 00:09:15.591 bw ( KiB/s): min=10730, max=10730, per=40.18%, avg=10730.00, stdev= 0.00, samples=1 00:09:15.591 iops : min= 2682, max= 2682, avg=2682.00, stdev= 0.00, samples=1 00:09:15.591 lat (usec) : 250=80.14%, 500=14.81%, 750=4.50%, 1000=0.39% 00:09:15.591 lat (msec) : 2=0.03%, 4=0.08%, 10=0.05% 00:09:15.591 cpu : usr=1.70%, sys=8.40%, ctx=3822, majf=0, minf=5 00:09:15.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.591 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.591 issued rwts: total=1773,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.591 job2: (groupid=0, jobs=1): err= 0: pid=70249: Tue Nov 19 17:08:07 2024 00:09:15.591 read: IOPS=1055, BW=4224KiB/s (4325kB/s)(4228KiB/1001msec) 00:09:15.591 slat (nsec): min=13250, max=81078, avg=24576.90, stdev=9698.68 00:09:15.591 clat (usec): min=200, max=1627, avg=406.89, stdev=90.86 00:09:15.591 lat (usec): min=224, max=1656, avg=431.46, stdev=95.21 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 347], 00:09:15.591 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:09:15.591 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 498], 95.00th=[ 578], 00:09:15.591 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 1074], 99.95th=[ 1631], 00:09:15.591 | 99.99th=[ 1631] 00:09:15.591 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:15.591 slat (usec): min=21, max=114, avg=36.80, stdev= 8.91 00:09:15.591 clat (usec): min=174, max=537, avg=312.48, stdev=59.66 00:09:15.591 lat (usec): min=221, max=574, avg=349.28, stdev=60.23 00:09:15.591 clat percentiles (usec): 00:09:15.591 | 1.00th=[ 206], 5.00th=[ 227], 10.00th=[ 243], 20.00th=[ 265], 00:09:15.591 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 00:09:15.591 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 424], 00:09:15.591 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 502], 99.95th=[ 537], 00:09:15.591 | 99.99th=[ 537] 00:09:15.591 bw ( KiB/s): min= 6722, max= 6722, per=25.17%, avg=6722.00, stdev= 0.00, samples=1 00:09:15.591 iops : min= 1680, max= 1680, avg=1680.00, stdev= 0.00, samples=1 00:09:15.591 lat (usec) : 250=7.52%, 500=88.55%, 750=3.74%, 1000=0.12% 00:09:15.591 lat (msec) : 2=0.08% 00:09:15.591 cpu : usr=1.50%, sys=6.50%, ctx=2593, majf=0, minf=15 00:09:15.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.592 issued rwts: total=1057,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.592 job3: (groupid=0, jobs=1): err= 0: pid=70250: Tue Nov 19 17:08:07 2024 00:09:15.592 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:15.592 slat (nsec): min=15471, max=80788, avg=24751.47, stdev=8563.50 00:09:15.592 clat (usec): min=210, max=3525, avg=311.07, stdev=111.91 00:09:15.592 lat (usec): min=229, max=3555, avg=335.82, stdev=113.47 00:09:15.592 clat percentiles (usec): 00:09:15.592 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 265], 00:09:15.592 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 318], 00:09:15.592 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 383], 00:09:15.592 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 2606], 99.95th=[ 3523], 00:09:15.592 | 99.99th=[ 3523] 00:09:15.592 write: IOPS=1561, BW=6246KiB/s (6396kB/s)(6252KiB/1001msec); 0 zone resets 00:09:15.592 slat (usec): min=22, max=124, avg=39.32, stdev=12.69 00:09:15.592 clat (usec): min=148, max=572, avg=264.76, stdev=44.60 00:09:15.592 lat (usec): min=172, max=642, avg=304.08, stdev=48.91 00:09:15.592 clat percentiles 
(usec): 00:09:15.592 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 231], 00:09:15.592 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:09:15.592 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 343], 00:09:15.592 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 469], 99.95th=[ 570], 00:09:15.592 | 99.99th=[ 570] 00:09:15.592 bw ( KiB/s): min= 7848, max= 7848, per=29.39%, avg=7848.00, stdev= 0.00, samples=1 00:09:15.592 iops : min= 1962, max= 1962, avg=1962.00, stdev= 0.00, samples=1 00:09:15.592 lat (usec) : 250=24.43%, 500=75.41%, 750=0.06% 00:09:15.592 lat (msec) : 2=0.03%, 4=0.06% 00:09:15.592 cpu : usr=2.10%, sys=7.50%, ctx=3100, majf=0, minf=5 00:09:15.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.592 issued rwts: total=1536,1563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.592 00:09:15.592 Run status group 0 (all jobs): 00:09:15.592 READ: bw=21.3MiB/s (22.3MB/s), 4224KiB/s-7085KiB/s (4325kB/s-7255kB/s), io=21.3MiB (22.3MB), run=1001-1001msec 00:09:15.592 WRITE: bw=26.1MiB/s (27.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.1MiB (27.4MB), run=1001-1001msec 00:09:15.592 00:09:15.592 Disk stats (read/write): 00:09:15.592 nvme0n1: ios=1074/1227, merge=0/0, ticks=433/388, in_queue=821, util=88.16% 00:09:15.592 nvme0n2: ios=1597/2048, merge=0/0, ticks=398/424, in_queue=822, util=88.82% 00:09:15.592 nvme0n3: ios=1051/1201, merge=0/0, ticks=472/378, in_queue=850, util=90.35% 00:09:15.592 nvme0n4: ios=1080/1536, merge=0/0, ticks=415/422, in_queue=837, util=90.51% 00:09:15.592 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:15.592 [global] 00:09:15.592 thread=1 00:09:15.592 invalidate=1 00:09:15.592 rw=write 00:09:15.592 time_based=1 00:09:15.592 runtime=1 00:09:15.592 ioengine=libaio 00:09:15.592 direct=1 00:09:15.592 bs=4096 00:09:15.592 iodepth=128 00:09:15.592 norandommap=0 00:09:15.592 numjobs=1 00:09:15.592 00:09:15.592 verify_dump=1 00:09:15.592 verify_backlog=512 00:09:15.592 verify_state_save=0 00:09:15.592 do_verify=1 00:09:15.592 verify=crc32c-intel 00:09:15.592 [job0] 00:09:15.592 filename=/dev/nvme0n1 00:09:15.592 [job1] 00:09:15.592 filename=/dev/nvme0n2 00:09:15.592 [job2] 00:09:15.592 filename=/dev/nvme0n3 00:09:15.592 [job3] 00:09:15.592 filename=/dev/nvme0n4 00:09:15.592 Could not set queue depth (nvme0n1) 00:09:15.592 Could not set queue depth (nvme0n2) 00:09:15.592 Could not set queue depth (nvme0n3) 00:09:15.592 Could not set queue depth (nvme0n4) 00:09:15.592 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.592 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.592 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.592 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.592 fio-3.35 00:09:15.592 Starting 4 threads 00:09:16.966 00:09:16.966 job0: (groupid=0, jobs=1): err= 0: pid=70311: Tue Nov 19 17:08:09 2024 00:09:16.966 read: IOPS=2286, BW=9147KiB/s 
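The remaining two runs pass -d 128 to fio-wrapper, i.e. iodepth=128 in the job file: with ioengine=libaio and direct=1, fio keeps up to 128 I/Os in flight per job instead of one. The effect is visible in the "IO depths" histograms, 1=100.0% in the depth-1 runs above versus >=64=98.7% and higher in the results below, confirming the queue really ran deep:

  iodepth=128    # libaio + direct=1: up to 128 outstanding I/Os per job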
(9366kB/s)(9220KiB/1008msec) 00:09:16.966 slat (usec): min=6, max=9587, avg=214.91, stdev=934.74 00:09:16.966 clat (usec): min=3002, max=38058, avg=27945.95, stdev=4226.00 00:09:16.966 lat (usec): min=7844, max=38094, avg=28160.86, stdev=4169.26 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[ 8291], 5.00th=[22938], 10.00th=[25560], 20.00th=[26346], 00:09:16.966 | 30.00th=[26608], 40.00th=[26870], 50.00th=[27395], 60.00th=[27919], 00:09:16.966 | 70.00th=[29230], 80.00th=[30540], 90.00th=[33162], 95.00th=[34341], 00:09:16.966 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:16.966 | 99.99th=[38011] 00:09:16.966 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:09:16.966 slat (usec): min=10, max=9640, avg=190.73, stdev=925.29 00:09:16.966 clat (usec): min=14283, max=29160, avg=24369.43, stdev=2339.18 00:09:16.966 lat (usec): min=18546, max=30449, avg=24560.16, stdev=2200.80 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[18482], 5.00th=[19530], 10.00th=[20579], 20.00th=[22414], 00:09:16.966 | 30.00th=[23200], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:09:16.966 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:09:16.966 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:09:16.966 | 99.99th=[29230] 00:09:16.966 bw ( KiB/s): min=10008, max=10472, per=17.38%, avg=10240.00, stdev=328.10, samples=2 00:09:16.966 iops : min= 2502, max= 2618, avg=2560.00, stdev=82.02, samples=2 00:09:16.966 lat (msec) : 4=0.02%, 10=0.66%, 20=3.72%, 50=95.60% 00:09:16.966 cpu : usr=2.88%, sys=6.95%, ctx=217, majf=0, minf=11 00:09:16.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.966 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.966 job1: (groupid=0, jobs=1): err= 0: pid=70312: Tue Nov 19 17:08:09 2024 00:09:16.966 read: IOPS=5050, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1001msec) 00:09:16.966 slat (usec): min=4, max=3040, avg=96.22, stdev=453.32 00:09:16.966 clat (usec): min=391, max=15305, avg=12630.99, stdev=1189.14 00:09:16.966 lat (usec): min=2860, max=15745, avg=12727.21, stdev=1111.63 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[ 6652], 5.00th=[10552], 10.00th=[11731], 20.00th=[12518], 00:09:16.966 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:09:16.966 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13566], 00:09:16.966 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15270], 99.95th=[15270], 00:09:16.966 | 99.99th=[15270] 00:09:16.966 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:16.966 slat (usec): min=9, max=3058, avg=92.68, stdev=397.11 00:09:16.966 clat (usec): min=9551, max=15110, avg=12224.52, stdev=1267.59 00:09:16.966 lat (usec): min=9579, max=15128, avg=12317.20, stdev=1268.09 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:09:16.966 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12256], 60.00th=[12780], 00:09:16.966 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091], 00:09:16.966 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:09:16.966 | 99.99th=[15139] 00:09:16.966 bw 
( KiB/s): min=20480, max=20521, per=34.79%, avg=20500.50, stdev=28.99, samples=2 00:09:16.966 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:16.966 lat (usec) : 500=0.01% 00:09:16.966 lat (msec) : 4=0.31%, 10=1.26%, 20=98.42% 00:09:16.966 cpu : usr=4.80%, sys=13.30%, ctx=503, majf=0, minf=13 00:09:16.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.966 issued rwts: total=5056,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.966 job2: (groupid=0, jobs=1): err= 0: pid=70313: Tue Nov 19 17:08:09 2024 00:09:16.966 read: IOPS=2386, BW=9547KiB/s (9776kB/s)(9604KiB/1006msec) 00:09:16.966 slat (usec): min=6, max=8027, avg=212.19, stdev=942.96 00:09:16.966 clat (usec): min=462, max=38974, avg=26058.37, stdev=4356.17 00:09:16.966 lat (usec): min=6415, max=38990, avg=26270.56, stdev=4287.25 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[ 6915], 5.00th=[20055], 10.00th=[22152], 20.00th=[23725], 00:09:16.966 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:09:16.966 | 70.00th=[27132], 80.00th=[27919], 90.00th=[29492], 95.00th=[33162], 00:09:16.966 | 99.00th=[35914], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:09:16.966 | 99.99th=[39060] 00:09:16.966 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:09:16.966 slat (usec): min=10, max=6525, avg=184.75, stdev=890.72 00:09:16.966 clat (usec): min=13706, max=37055, avg=24927.09, stdev=3690.40 00:09:16.966 lat (usec): min=15067, max=37079, avg=25111.84, stdev=3620.43 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[17433], 5.00th=[19006], 10.00th=[20579], 20.00th=[21627], 00:09:16.966 | 30.00th=[23462], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:09:16.966 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28443], 95.00th=[32113], 00:09:16.966 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:09:16.966 | 99.99th=[36963] 00:09:16.966 bw ( KiB/s): min= 9416, max=11086, per=17.40%, avg=10251.00, stdev=1180.87, samples=2 00:09:16.966 iops : min= 2354, max= 2771, avg=2562.50, stdev=294.86, samples=2 00:09:16.966 lat (usec) : 500=0.02% 00:09:16.966 lat (msec) : 10=0.65%, 20=5.34%, 50=93.99% 00:09:16.966 cpu : usr=2.09%, sys=8.26%, ctx=210, majf=0, minf=15 00:09:16.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:16.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.966 issued rwts: total=2401,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.966 job3: (groupid=0, jobs=1): err= 0: pid=70314: Tue Nov 19 17:08:09 2024 00:09:16.966 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:16.966 slat (usec): min=7, max=8171, avg=106.07, stdev=557.46 00:09:16.966 clat (usec): min=2516, max=20715, avg=14007.04, stdev=1542.08 00:09:16.966 lat (usec): min=2538, max=20734, avg=14113.11, stdev=1605.76 00:09:16.966 clat percentiles (usec): 00:09:16.966 | 1.00th=[ 7242], 5.00th=[11994], 10.00th=[12911], 20.00th=[13566], 00:09:16.967 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:09:16.967 | 70.00th=[14353], 
80.00th=[14484], 90.00th=[15270], 95.00th=[15926], 00:09:16.967 | 99.00th=[17957], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:16.967 | 99.99th=[20841] 00:09:16.967 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:16.967 slat (usec): min=9, max=5299, avg=103.29, stdev=523.13 00:09:16.967 clat (usec): min=9596, max=22353, avg=13502.55, stdev=1430.99 00:09:16.967 lat (usec): min=9619, max=22376, avg=13605.84, stdev=1419.15 00:09:16.967 clat percentiles (usec): 00:09:16.967 | 1.00th=[10028], 5.00th=[10421], 10.00th=[10945], 20.00th=[13042], 00:09:16.967 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:09:16.967 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14615], 95.00th=[15008], 00:09:16.967 | 99.00th=[18220], 99.50th=[18744], 99.90th=[18744], 99.95th=[20579], 00:09:16.967 | 99.99th=[22414] 00:09:16.967 bw ( KiB/s): min=17600, max=19302, per=31.32%, avg=18451.00, stdev=1203.50, samples=2 00:09:16.967 iops : min= 4400, max= 4825, avg=4612.50, stdev=300.52, samples=2 00:09:16.967 lat (msec) : 4=0.34%, 10=0.91%, 20=98.71%, 50=0.04% 00:09:16.967 cpu : usr=4.59%, sys=12.18%, ctx=318, majf=0, minf=14 00:09:16.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:16.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.967 issued rwts: total=4602,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.967 00:09:16.967 Run status group 0 (all jobs): 00:09:16.967 READ: bw=55.7MiB/s (58.4MB/s), 9147KiB/s-19.7MiB/s (9366kB/s-20.7MB/s), io=56.1MiB (58.8MB), run=1001-1008msec 00:09:16.967 WRITE: bw=57.5MiB/s (60.3MB/s), 9.92MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1001-1008msec 00:09:16.967 00:09:16.967 Disk stats (read/write): 00:09:16.967 nvme0n1: ios=2034/2048, merge=0/0, ticks=13880/11445, in_queue=25325, util=86.57% 00:09:16.967 nvme0n2: ios=4127/4477, merge=0/0, ticks=11974/11928, in_queue=23902, util=87.74% 00:09:16.967 nvme0n3: ios=2048/2144, merge=0/0, ticks=13273/11566, in_queue=24839, util=88.76% 00:09:16.967 nvme0n4: ios=3652/4096, merge=0/0, ticks=15650/15541, in_queue=31191, util=89.10% 00:09:16.967 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:16.967 [global] 00:09:16.967 thread=1 00:09:16.967 invalidate=1 00:09:16.967 rw=randwrite 00:09:16.967 time_based=1 00:09:16.967 runtime=1 00:09:16.967 ioengine=libaio 00:09:16.967 direct=1 00:09:16.967 bs=4096 00:09:16.967 iodepth=128 00:09:16.967 norandommap=0 00:09:16.967 numjobs=1 00:09:16.967 00:09:16.967 verify_dump=1 00:09:16.967 verify_backlog=512 00:09:16.967 verify_state_save=0 00:09:16.967 do_verify=1 00:09:16.967 verify=crc32c-intel 00:09:16.967 [job0] 00:09:16.967 filename=/dev/nvme0n1 00:09:16.967 [job1] 00:09:16.967 filename=/dev/nvme0n2 00:09:16.967 [job2] 00:09:16.967 filename=/dev/nvme0n3 00:09:16.967 [job3] 00:09:16.967 filename=/dev/nvme0n4 00:09:16.967 Could not set queue depth (nvme0n1) 00:09:16.967 Could not set queue depth (nvme0n2) 00:09:16.967 Could not set queue depth (nvme0n3) 00:09:16.967 Could not set queue depth (nvme0n4) 00:09:16.967 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.967 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.967 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.967 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.967 fio-3.35 00:09:16.967 Starting 4 threads 00:09:18.343 00:09:18.344 job0: (groupid=0, jobs=1): err= 0: pid=70367: Tue Nov 19 17:08:10 2024 00:09:18.344 read: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1003msec) 00:09:18.344 slat (usec): min=5, max=5564, avg=170.86, stdev=808.91 00:09:18.344 clat (usec): min=839, max=27075, avg=21581.77, stdev=2807.61 00:09:18.344 lat (usec): min=2307, max=28124, avg=21752.63, stdev=2752.45 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[ 6456], 5.00th=[17695], 10.00th=[19006], 20.00th=[20055], 00:09:18.344 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22152], 60.00th=[22414], 00:09:18.344 | 70.00th=[22938], 80.00th=[23462], 90.00th=[24249], 95.00th=[24773], 00:09:18.344 | 99.00th=[25560], 99.50th=[26084], 99.90th=[27132], 99.95th=[27132], 00:09:18.344 | 99.99th=[27132] 00:09:18.344 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:18.344 slat (usec): min=11, max=5190, avg=161.88, stdev=659.01 00:09:18.344 clat (usec): min=14845, max=26045, avg=21404.73, stdev=1642.13 00:09:18.344 lat (usec): min=16554, max=26072, avg=21566.61, stdev=1539.25 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[16909], 5.00th=[18220], 10.00th=[19006], 20.00th=[20317], 00:09:18.344 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:09:18.344 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[23987], 00:09:18.344 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[26084], 00:09:18.344 | 99.99th=[26084] 00:09:18.344 bw ( KiB/s): min=12288, max=12312, per=22.77%, avg=12300.00, stdev=16.97, samples=2 00:09:18.344 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:18.344 lat (usec) : 1000=0.02% 00:09:18.344 lat (msec) : 4=0.10%, 10=0.44%, 20=16.39%, 50=83.04% 00:09:18.344 cpu : usr=3.59%, sys=7.78%, ctx=335, majf=0, minf=5 00:09:18.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:18.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.344 issued rwts: total=2802,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.344 job1: (groupid=0, jobs=1): err= 0: pid=70368: Tue Nov 19 17:08:10 2024 00:09:18.344 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:09:18.344 slat (usec): min=7, max=5641, avg=131.83, stdev=686.92 00:09:18.344 clat (usec): min=12688, max=24150, avg=17513.39, stdev=1494.37 00:09:18.344 lat (usec): min=12705, max=24503, avg=17645.22, stdev=1588.00 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[13566], 5.00th=[15008], 10.00th=[15926], 20.00th=[16450], 00:09:18.344 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:09:18.344 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:09:18.344 | 99.00th=[21627], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:09:18.344 | 99.99th=[24249] 00:09:18.344 write: IOPS=3756, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1002msec); 0 zone resets 00:09:18.344 slat (usec): min=11, max=5662, avg=131.34, stdev=638.51 
00:09:18.344 clat (usec): min=617, max=23649, avg=16880.59, stdev=2036.13 00:09:18.344 lat (usec): min=5361, max=23684, avg=17011.93, stdev=2043.30 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[ 6390], 5.00th=[12911], 10.00th=[14353], 20.00th=[16188], 00:09:18.344 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17433], 60.00th=[17433], 00:09:18.344 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:09:18.344 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:09:18.344 | 99.99th=[23725] 00:09:18.344 bw ( KiB/s): min=12704, max=16416, per=26.95%, avg=14560.00, stdev=2624.78, samples=2 00:09:18.344 iops : min= 3176, max= 4104, avg=3640.00, stdev=656.20, samples=2 00:09:18.344 lat (usec) : 750=0.01% 00:09:18.344 lat (msec) : 10=0.57%, 20=96.20%, 50=3.21% 00:09:18.344 cpu : usr=4.40%, sys=11.19%, ctx=297, majf=0, minf=5 00:09:18.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:18.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.344 issued rwts: total=3584,3764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.344 job2: (groupid=0, jobs=1): err= 0: pid=70369: Tue Nov 19 17:08:10 2024 00:09:18.344 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:09:18.344 slat (usec): min=3, max=5579, avg=153.87, stdev=761.16 00:09:18.344 clat (usec): min=14451, max=25849, avg=20348.87, stdev=1461.02 00:09:18.344 lat (usec): min=15771, max=26242, avg=20502.74, stdev=1290.50 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[15795], 5.00th=[17171], 10.00th=[19006], 20.00th=[19530], 00:09:18.344 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:09:18.344 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[22152], 00:09:18.344 | 99.00th=[24773], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:09:18.344 | 99.99th=[25822] 00:09:18.344 write: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1004msec); 0 zone resets 00:09:18.344 slat (usec): min=13, max=5338, avg=151.84, stdev=666.02 00:09:18.344 clat (usec): min=207, max=25299, avg=19395.82, stdev=2866.30 00:09:18.344 lat (usec): min=3997, max=25321, avg=19547.66, stdev=2847.69 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[ 5080], 5.00th=[15926], 10.00th=[16450], 20.00th=[16909], 00:09:18.344 | 30.00th=[17957], 40.00th=[19268], 50.00th=[20317], 60.00th=[20579], 00:09:18.344 | 70.00th=[21103], 80.00th=[21365], 90.00th=[22414], 95.00th=[22676], 00:09:18.344 | 99.00th=[23725], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:09:18.344 | 99.99th=[25297] 00:09:18.344 bw ( KiB/s): min=12288, max=13136, per=23.53%, avg=12712.00, stdev=599.63, samples=2 00:09:18.344 iops : min= 3072, max= 3284, avg=3178.00, stdev=149.91, samples=2 00:09:18.344 lat (usec) : 250=0.02% 00:09:18.344 lat (msec) : 10=0.69%, 20=38.65%, 50=60.65% 00:09:18.344 cpu : usr=3.39%, sys=9.77%, ctx=316, majf=0, minf=5 00:09:18.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:18.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.344 issued rwts: total=3072,3306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.344 job3: (groupid=0, jobs=1): err= 
0: pid=70370: Tue Nov 19 17:08:10 2024 00:09:18.344 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:18.344 slat (usec): min=6, max=5334, avg=150.62, stdev=744.52 00:09:18.344 clat (usec): min=14244, max=23550, avg=20097.33, stdev=1581.38 00:09:18.344 lat (usec): min=14684, max=26502, avg=20247.94, stdev=1429.86 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[15533], 5.00th=[16712], 10.00th=[18482], 20.00th=[19006], 00:09:18.344 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:09:18.344 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22152], 95.00th=[22414], 00:09:18.344 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:09:18.344 | 99.99th=[23462] 00:09:18.344 write: IOPS=3407, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1003msec); 0 zone resets 00:09:18.344 slat (usec): min=12, max=5142, avg=149.26, stdev=699.00 00:09:18.344 clat (usec): min=340, max=23490, avg=18879.37, stdev=2841.07 00:09:18.344 lat (usec): min=4240, max=23514, avg=19028.63, stdev=2828.39 00:09:18.344 clat percentiles (usec): 00:09:18.344 | 1.00th=[ 9372], 5.00th=[15401], 10.00th=[15926], 20.00th=[16909], 00:09:18.344 | 30.00th=[17171], 40.00th=[17695], 50.00th=[19006], 60.00th=[20055], 00:09:18.344 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22152], 95.00th=[22414], 00:09:18.344 | 99.00th=[23200], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:09:18.344 | 99.99th=[23462] 00:09:18.344 bw ( KiB/s): min=12368, max=13976, per=24.38%, avg=13172.00, stdev=1137.03, samples=2 00:09:18.344 iops : min= 3092, max= 3494, avg=3293.00, stdev=284.26, samples=2 00:09:18.344 lat (usec) : 500=0.02% 00:09:18.344 lat (msec) : 10=0.71%, 20=51.79%, 50=47.49% 00:09:18.344 cpu : usr=3.39%, sys=9.98%, ctx=309, majf=0, minf=3 00:09:18.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:18.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.344 issued rwts: total=3072,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.344 00:09:18.344 Run status group 0 (all jobs): 00:09:18.344 READ: bw=48.8MiB/s (51.1MB/s), 10.9MiB/s-14.0MiB/s (11.4MB/s-14.7MB/s), io=48.9MiB (51.3MB), run=1002-1004msec 00:09:18.344 WRITE: bw=52.8MiB/s (55.3MB/s), 12.0MiB/s-14.7MiB/s (12.5MB/s-15.4MB/s), io=53.0MiB (55.5MB), run=1002-1004msec 00:09:18.344 00:09:18.344 Disk stats (read/write): 00:09:18.344 nvme0n1: ios=2510/2560, merge=0/0, ticks=13037/12431, in_queue=25468, util=88.47% 00:09:18.344 nvme0n2: ios=3121/3247, merge=0/0, ticks=16482/16114, in_queue=32596, util=89.90% 00:09:18.344 nvme0n3: ios=2595/2920, merge=0/0, ticks=12356/13041, in_queue=25397, util=90.28% 00:09:18.344 nvme0n4: ios=2560/3033, merge=0/0, ticks=12096/13058, in_queue=25154, util=89.69% 00:09:18.344 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:18.344 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70385 00:09:18.344 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:18.344 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:18.344 [global] 00:09:18.344 thread=1 00:09:18.344 invalidate=1 00:09:18.344 rw=read 00:09:18.344 time_based=1 00:09:18.344 runtime=10 00:09:18.344 ioengine=libaio 
00:09:18.344 direct=1 00:09:18.344 bs=4096 00:09:18.344 iodepth=1 00:09:18.344 norandommap=1 00:09:18.344 numjobs=1 00:09:18.344 00:09:18.344 [job0] 00:09:18.344 filename=/dev/nvme0n1 00:09:18.344 [job1] 00:09:18.344 filename=/dev/nvme0n2 00:09:18.344 [job2] 00:09:18.344 filename=/dev/nvme0n3 00:09:18.344 [job3] 00:09:18.344 filename=/dev/nvme0n4 00:09:18.344 Could not set queue depth (nvme0n1) 00:09:18.344 Could not set queue depth (nvme0n2) 00:09:18.344 Could not set queue depth (nvme0n3) 00:09:18.344 Could not set queue depth (nvme0n4) 00:09:18.344 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.344 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.344 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.344 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.344 fio-3.35 00:09:18.344 Starting 4 threads 00:09:21.689 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:21.689 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49451008, buflen=4096 00:09:21.689 fio: pid=70433, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.689 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:21.952 fio: pid=70432, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.952 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=32407552, buflen=4096 00:09:21.952 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.952 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:22.210 fio: pid=70430, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.210 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38854656, buflen=4096 00:09:22.210 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.211 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:22.470 fio: pid=70431, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.470 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55066624, buflen=4096 00:09:22.470 00:09:22.470 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70430: Tue Nov 19 17:08:14 2024 00:09:22.470 read: IOPS=2667, BW=10.4MiB/s (10.9MB/s)(37.1MiB/3556msec) 00:09:22.470 slat (usec): min=10, max=12452, avg=28.56, stdev=205.74 00:09:22.470 clat (usec): min=143, max=3175, avg=344.15, stdev=118.74 00:09:22.470 lat (usec): min=157, max=12777, avg=372.71, stdev=238.42 00:09:22.470 clat percentiles (usec): 00:09:22.470 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 277], 00:09:22.470 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 371], 00:09:22.470 | 70.00th=[ 392], 80.00th=[ 416], 
90.00th=[ 449], 95.00th=[ 474], 00:09:22.470 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[ 1385], 99.95th=[ 2311], 00:09:22.470 | 99.99th=[ 3163] 00:09:22.470 bw ( KiB/s): min= 9601, max= 9864, per=22.01%, avg=9772.17, stdev=97.60, samples=6 00:09:22.470 iops : min= 2400, max= 2466, avg=2443.00, stdev=24.49, samples=6 00:09:22.470 lat (usec) : 250=18.43%, 500=79.23%, 750=2.09%, 1000=0.11% 00:09:22.470 lat (msec) : 2=0.07%, 4=0.06% 00:09:22.470 cpu : usr=1.24%, sys=5.49%, ctx=9496, majf=0, minf=1 00:09:22.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 issued rwts: total=9487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.470 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70431: Tue Nov 19 17:08:14 2024 00:09:22.470 read: IOPS=3477, BW=13.6MiB/s (14.2MB/s)(52.5MiB/3866msec) 00:09:22.470 slat (usec): min=11, max=11575, avg=19.42, stdev=204.65 00:09:22.470 clat (usec): min=124, max=242205, avg=266.80, stdev=2090.36 00:09:22.470 lat (usec): min=137, max=242231, avg=286.22, stdev=2100.40 00:09:22.470 clat percentiles (usec): 00:09:22.470 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 169], 20.00th=[ 223], 00:09:22.470 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:09:22.470 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:09:22.470 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 1401], 99.95th=[ 3687], 00:09:22.470 | 99.99th=[ 7898] 00:09:22.470 bw ( KiB/s): min= 8348, max=14656, per=30.34%, avg=13473.57, stdev=2286.78, samples=7 00:09:22.470 iops : min= 2087, max= 3664, avg=3368.29, stdev=571.66, samples=7 00:09:22.470 lat (usec) : 250=48.27%, 500=51.45%, 750=0.09%, 1000=0.04% 00:09:22.470 lat (msec) : 2=0.06%, 4=0.04%, 10=0.02%, 250=0.01% 00:09:22.470 cpu : usr=1.09%, sys=4.27%, ctx=13458, majf=0, minf=1 00:09:22.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 issued rwts: total=13445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.470 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70432: Tue Nov 19 17:08:14 2024 00:09:22.470 read: IOPS=2449, BW=9795KiB/s (10.0MB/s)(30.9MiB/3231msec) 00:09:22.470 slat (usec): min=8, max=11166, avg=22.03, stdev=158.42 00:09:22.470 clat (usec): min=158, max=3239, avg=384.39, stdev=78.60 00:09:22.470 lat (usec): min=178, max=11542, avg=406.42, stdev=177.19 00:09:22.470 clat percentiles (usec): 00:09:22.470 | 1.00th=[ 212], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 338], 00:09:22.470 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 392], 00:09:22.470 | 70.00th=[ 408], 80.00th=[ 433], 90.00th=[ 465], 95.00th=[ 486], 00:09:22.470 | 99.00th=[ 553], 99.50th=[ 685], 99.90th=[ 1123], 99.95th=[ 1303], 00:09:22.470 | 99.99th=[ 3228] 00:09:22.470 bw ( KiB/s): min= 9672, max=10247, per=22.27%, avg=9889.17, stdev=190.65, samples=6 00:09:22.470 iops : min= 2418, max= 2561, avg=2472.17, stdev=47.38, samples=6 00:09:22.470 lat (usec) : 250=2.07%, 500=94.92%, 
750=2.68%, 1000=0.14% 00:09:22.470 lat (msec) : 2=0.16%, 4=0.01% 00:09:22.470 cpu : usr=0.99%, sys=3.84%, ctx=7916, majf=0, minf=2 00:09:22.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 issued rwts: total=7913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.470 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70433: Tue Nov 19 17:08:14 2024 00:09:22.470 read: IOPS=4076, BW=15.9MiB/s (16.7MB/s)(47.2MiB/2962msec) 00:09:22.470 slat (usec): min=12, max=104, avg=16.10, stdev= 3.49 00:09:22.470 clat (usec): min=147, max=6780, avg=227.89, stdev=107.26 00:09:22.470 lat (usec): min=170, max=6794, avg=243.99, stdev=107.44 00:09:22.470 clat percentiles (usec): 00:09:22.470 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:09:22.470 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 231], 00:09:22.470 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 297], 00:09:22.470 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 914], 99.95th=[ 2278], 00:09:22.470 | 99.99th=[ 6587] 00:09:22.470 bw ( KiB/s): min=15944, max=16696, per=36.68%, avg=16289.60, stdev=326.45, samples=5 00:09:22.470 iops : min= 3986, max= 4174, avg=4072.40, stdev=81.61, samples=5 00:09:22.470 lat (usec) : 250=74.96%, 500=24.89%, 750=0.01%, 1000=0.05% 00:09:22.470 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:09:22.470 cpu : usr=1.22%, sys=5.37%, ctx=12085, majf=0, minf=2 00:09:22.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.470 issued rwts: total=12074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.470 00:09:22.470 Run status group 0 (all jobs): 00:09:22.470 READ: bw=43.4MiB/s (45.5MB/s), 9795KiB/s-15.9MiB/s (10.0MB/s-16.7MB/s), io=168MiB (176MB), run=2962-3866msec 00:09:22.470 00:09:22.470 Disk stats (read/write): 00:09:22.470 nvme0n1: ios=8487/0, merge=0/0, ticks=3112/0, in_queue=3112, util=95.19% 00:09:22.470 nvme0n2: ios=13444/0, merge=0/0, ticks=3637/0, in_queue=3637, util=95.41% 00:09:22.470 nvme0n3: ios=7651/0, merge=0/0, ticks=2982/0, in_queue=2982, util=96.21% 00:09:22.470 nvme0n4: ios=11676/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.42% 00:09:22.470 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.470 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:22.729 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.729 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:23.296 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.296 17:08:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:23.555 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.555 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:23.815 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.815 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70385 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.074 nvmf hotplug test: fio failed as expected 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:24.074 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
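The hotplug test above follows a simple pattern: start fio in the background against the connected namespaces, delete the backing bdevs over RPC while I/O is in flight, and verify that fio exits with an error ("fio failed as expected"). A minimal sketch of that pattern, using the same RPC call as the trace (the device path and job parameters mirror the read job file above; exact nvme numbering depends on the host):

# long-running read job against a connected namespace, in the background
fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
    --ioengine=libaio --iodepth=1 --time_based --runtime=10 &
fio_pid=$!
sleep 3   # let the job ramp up before pulling storage away

# remove one of the backing bdevs while fio is reading from it
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0

# fio is now expected to fail with io_u errors (Operation not supported)
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev hot-remove"
else
    echo "nvmf hotplug test: fio failed as expected"
fi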
00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.333 rmmod nvme_tcp 00:09:24.333 rmmod nvme_fabrics 00:09:24.333 rmmod nvme_keyring 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69888 ']' 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69888 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69888 ']' 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69888 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.333 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69888 00:09:24.592 killing process with pid 69888 00:09:24.592 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.592 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.592 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69888' 00:09:24.592 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69888 00:09:24.592 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69888 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.851 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:25.111 00:09:25.111 real 0m21.309s 00:09:25.111 user 1m21.698s 00:09:25.111 sys 0m8.765s 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 ************************************ 00:09:25.111 END TEST nvmf_fio_target 00:09:25.111 ************************************ 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 ************************************ 00:09:25.111 START TEST nvmf_bdevio 00:09:25.111 ************************************ 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.111 * Looking for test storage... 
00:09:25.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.111 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.371 --rc genhtml_branch_coverage=1 00:09:25.371 --rc genhtml_function_coverage=1 00:09:25.371 --rc genhtml_legend=1 00:09:25.371 --rc geninfo_all_blocks=1 00:09:25.371 --rc geninfo_unexecuted_blocks=1 00:09:25.371 00:09:25.371 ' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.371 --rc genhtml_branch_coverage=1 00:09:25.371 --rc genhtml_function_coverage=1 00:09:25.371 --rc genhtml_legend=1 00:09:25.371 --rc geninfo_all_blocks=1 00:09:25.371 --rc geninfo_unexecuted_blocks=1 00:09:25.371 00:09:25.371 ' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.371 --rc genhtml_branch_coverage=1 00:09:25.371 --rc genhtml_function_coverage=1 00:09:25.371 --rc genhtml_legend=1 00:09:25.371 --rc geninfo_all_blocks=1 00:09:25.371 --rc geninfo_unexecuted_blocks=1 00:09:25.371 00:09:25.371 ' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.371 --rc genhtml_branch_coverage=1 00:09:25.371 --rc genhtml_function_coverage=1 00:09:25.371 --rc genhtml_legend=1 00:09:25.371 --rc geninfo_all_blocks=1 00:09:25.371 --rc geninfo_unexecuted_blocks=1 00:09:25.371 00:09:25.371 ' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.371 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
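nvmftestinit, traced next, builds the virtual test network: veth pairs whose initiator ends stay on the host (10.0.0.1/24 and 10.0.0.2/24), target ends that are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and a bridge nvmf_br that joins the host-side peers so initiator and target can reach each other. Condensed to a single initiator/target pair, the sketch below is assembled from the same ip/iptables commands the trace runs:

# target side gets its own network namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# admit NVMe/TCP traffic arriving on the initiator interface
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at the end of the trace (host to 10.0.0.3/10.0.0.4, namespace back to 10.0.0.1/10.0.0.2) confirm this topology before the target starts.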
00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:25.372 Cannot find device "nvmf_init_br" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:25.372 Cannot find device "nvmf_init_br2" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:25.372 Cannot find device "nvmf_tgt_br" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.372 Cannot find device "nvmf_tgt_br2" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:25.372 Cannot find device "nvmf_init_br" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:25.372 Cannot find device "nvmf_init_br2" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:25.372 Cannot find device "nvmf_tgt_br" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:25.372 Cannot find device "nvmf_tgt_br2" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:25.372 Cannot find device "nvmf_br" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:25.372 Cannot find device "nvmf_init_if" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:25.372 Cannot find device "nvmf_init_if2" 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.372 
17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.372 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.631 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.632 17:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:25.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:25.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:09:25.632 00:09:25.632 --- 10.0.0.3 ping statistics --- 00:09:25.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.632 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:25.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:25.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:09:25.632 00:09:25.632 --- 10.0.0.4 ping statistics --- 00:09:25.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.632 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:25.632 00:09:25.632 --- 10.0.0.1 ping statistics --- 00:09:25.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.632 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:25.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:25.632 00:09:25.632 --- 10.0.0.2 ping statistics --- 00:09:25.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.632 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:25.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70813 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70813 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70813 ']' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.632 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:25.891 [2024-11-19 17:08:18.192609] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
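Note: the nvmf_veth_init sequence xtraced above builds a self-contained two-host test bed on a single machine: four veth pairs, with the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, the initiator ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) left in the root namespace, and all bridge-side peers enslaved to nvmf_br so the two sides can reach each other. A condensed sketch of that same setup, assembled from the commands captured in the log (run as root; names and addresses are the ones this harness uses):

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # four veth pairs: the *_if end carries traffic, the *_br end plugs into the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-side interfaces live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators .1/.2 outside, target .3/.4 inside
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, then wire the bridge-side peers into nvmf_br
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # open TCP/4420 on the initiator interfaces and let the bridge forward;
  # the -m comment tag is what lets teardown strip exactly these rules later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  # sanity pings in both directions, exactly as in the log above
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2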
00:09:25.891 [2024-11-19 17:08:18.192773] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.891 [2024-11-19 17:08:18.343525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.891 [2024-11-19 17:08:18.416543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.891 [2024-11-19 17:08:18.416938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.891 [2024-11-19 17:08:18.417035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.891 [2024-11-19 17:08:18.417221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.891 [2024-11-19 17:08:18.417328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.891 [2024-11-19 17:08:18.418643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.891 [2024-11-19 17:08:18.418788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.891 [2024-11-19 17:08:18.418937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:25.891 [2024-11-19 17:08:18.418944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 [2024-11-19 17:08:19.254582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 Malloc0 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
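Note: nvmfappstart launches the target inside the namespace (hence the "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt" invocation above) and waitforlisten blocks until the app's RPC socket is serving; the reactor notices confirm the -m 0x78 core mask (cores 3-6). A minimal standalone sketch, assuming the default RPC socket path /var/tmp/spdk.sock and treating "the socket file exists" as an acceptable simplification of what waitforlisten actually checks:

  # launch the target pinned to cores 3-6 (-m 0x78) inside the namespace, in the background
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the UNIX-domain RPC socket appears
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the app died during startup
      sleep 0.1
  done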
00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.828 [2024-11-19 17:08:19.325222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.828 { 00:09:26.828 "params": { 00:09:26.828 "name": "Nvme$subsystem", 00:09:26.828 "trtype": "$TEST_TRANSPORT", 00:09:26.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.828 "adrfam": "ipv4", 00:09:26.828 "trsvcid": "$NVMF_PORT", 00:09:26.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.828 "hdgst": ${hdgst:-false}, 00:09:26.828 "ddgst": ${ddgst:-false} 00:09:26.828 }, 00:09:26.828 "method": "bdev_nvme_attach_controller" 00:09:26.828 } 00:09:26.828 EOF 00:09:26.828 )") 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:26.828 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.828 "params": { 00:09:26.828 "name": "Nvme1", 00:09:26.828 "trtype": "tcp", 00:09:26.828 "traddr": "10.0.0.3", 00:09:26.828 "adrfam": "ipv4", 00:09:26.828 "trsvcid": "4420", 00:09:26.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.828 "hdgst": false, 00:09:26.828 "ddgst": false 00:09:26.828 }, 00:09:26.828 "method": "bdev_nvme_attach_controller" 00:09:26.828 }' 00:09:27.086 [2024-11-19 17:08:19.402943] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
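Note: with the app up, bdevio.sh provisions the target over RPC: a TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.3:4420. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence can be replayed directly (flags exactly as captured in the run above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, flags as above
  $rpc bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB ramdisk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio binary is then pointed at the target with --json /dev/fd/62, where gen_nvmf_target_json emits the bdev_nvme_attach_controller stanza printed above, so the tester attaches to 10.0.0.3:4420 as controller Nvme1 and runs its CUnit suite against the resulting Nvme1n1 bdev.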
00:09:27.086 [2024-11-19 17:08:19.403975] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ] 00:09:27.086 [2024-11-19 17:08:19.565967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.345 [2024-11-19 17:08:19.656184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.345 [2024-11-19 17:08:19.656339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.345 [2024-11-19 17:08:19.656350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.603 I/O targets: 00:09:27.603 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:27.603 00:09:27.603 00:09:27.603 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.603 http://cunit.sourceforge.net/ 00:09:27.603 00:09:27.603 00:09:27.603 Suite: bdevio tests on: Nvme1n1 00:09:27.603 Test: blockdev write read block ...passed 00:09:27.603 Test: blockdev write zeroes read block ...passed 00:09:27.603 Test: blockdev write zeroes read no split ...passed 00:09:27.603 Test: blockdev write zeroes read split ...passed 00:09:27.603 Test: blockdev write zeroes read split partial ...passed 00:09:27.603 Test: blockdev reset ...[2024-11-19 17:08:20.016434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:27.603 [2024-11-19 17:08:20.016953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4f50 (9): Bad file descriptor 00:09:27.603 [2024-11-19 17:08:20.032006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:27.603 passed 00:09:27.603 Test: blockdev write read 8 blocks ...passed 00:09:27.603 Test: blockdev write read size > 128k ...passed 00:09:27.603 Test: blockdev write read invalid size ...passed 00:09:27.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.603 Test: blockdev write read max offset ...passed 00:09:27.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.862 Test: blockdev writev readv 8 blocks ...passed 00:09:27.862 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.862 Test: blockdev writev readv block ...passed 00:09:27.862 Test: blockdev writev readv size > 128k ...passed 00:09:27.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.862 Test: blockdev comparev and writev ...[2024-11-19 17:08:20.209340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.209572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.209699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.209718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.210946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.862 [2024-11-19 17:08:20.210956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:27.862 passed 00:09:27.862 Test: blockdev nvme passthru rw ...passed 00:09:27.862 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.862 Test: blockdev nvme admin passthru ...[2024-11-19 17:08:20.295733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.862 [2024-11-19 17:08:20.295820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.295971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.862 [2024-11-19 17:08:20.295989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.296121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.862 [2024-11-19 17:08:20.296136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:27.862 [2024-11-19 17:08:20.296262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.862 [2024-11-19 17:08:20.296277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:27.862 passed 00:09:27.862 Test: blockdev copy ...passed 00:09:27.862 00:09:27.862 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.862 suites 1 1 n/a 0 0 00:09:27.862 tests 23 23 23 0 0 00:09:27.862 asserts 152 152 152 0 n/a 00:09:27.862 00:09:27.862 Elapsed time = 0.899 seconds 00:09:28.121 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.121 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.121 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.381 rmmod nvme_tcp 00:09:28.381 rmmod nvme_fabrics 00:09:28.381 rmmod nvme_keyring 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
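Note: teardown (nvmftestfini) mirrors the setup, as the xtrace that follows shows: the target process is killed, the SPDK_NVMF-tagged iptables rules are stripped by filtering a full ruleset dump, and the veth/bridge/namespace plumbing is torn down. A condensed sketch of the same cleanup, assuming the names from the setup above (the final namespace removal is an assumption about what remove_spdk_ns does; the log only shows it being invoked):

  kill "$nvmfpid" && wait "$nvmfpid"
  # drop only the rules tagged SPDK_NVMF, leaving the rest of the firewall intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # unwind the topology: detach from the bridge, take links down, delete everything
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns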
00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70813 ']' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70813 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70813 ']' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70813 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70813 00:09:28.381 killing process with pid 70813 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70813' 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70813 00:09:28.381 17:08:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70813 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:28.641 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:28.900 00:09:28.900 real 0m3.826s 00:09:28.900 user 0m13.068s 00:09:28.900 sys 0m0.998s 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.900 ************************************ 00:09:28.900 END TEST nvmf_bdevio 00:09:28.900 ************************************ 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.900 00:09:28.900 real 3m38.966s 00:09:28.900 user 11m29.030s 00:09:28.900 sys 1m3.702s 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.900 17:08:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.900 ************************************ 00:09:28.900 END TEST nvmf_target_core 00:09:28.900 ************************************ 00:09:28.900 17:08:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.900 17:08:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.900 17:08:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.900 17:08:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.900 ************************************ 00:09:28.900 START TEST nvmf_target_extra 00:09:28.900 ************************************ 00:09:28.901 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:29.160 * Looking for test storage... 
00:09:29.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.160 --rc geninfo_unexecuted_blocks=1 00:09:29.160 00:09:29.160 ' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.160 --rc geninfo_unexecuted_blocks=1 00:09:29.160 00:09:29.160 ' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.160 --rc geninfo_unexecuted_blocks=1 00:09:29.160 00:09:29.160 ' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.160 --rc geninfo_unexecuted_blocks=1 00:09:29.160 00:09:29.160 ' 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.160 17:08:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.160 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:29.161 ************************************ 00:09:29.161 START TEST nvmf_example 00:09:29.161 ************************************ 00:09:29.161 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:29.422 * Looking for test storage... 
00:09:29.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.422 --rc genhtml_branch_coverage=1 00:09:29.422 --rc genhtml_function_coverage=1 00:09:29.422 --rc genhtml_legend=1 00:09:29.422 --rc geninfo_all_blocks=1 00:09:29.422 --rc geninfo_unexecuted_blocks=1 00:09:29.422 00:09:29.422 ' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.422 --rc genhtml_branch_coverage=1 00:09:29.422 --rc genhtml_function_coverage=1 00:09:29.422 --rc genhtml_legend=1 00:09:29.422 --rc geninfo_all_blocks=1 00:09:29.422 --rc geninfo_unexecuted_blocks=1 00:09:29.422 00:09:29.422 ' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.422 --rc genhtml_branch_coverage=1 00:09:29.422 --rc genhtml_function_coverage=1 00:09:29.422 --rc genhtml_legend=1 00:09:29.422 --rc geninfo_all_blocks=1 00:09:29.422 --rc geninfo_unexecuted_blocks=1 00:09:29.422 00:09:29.422 ' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.422 --rc genhtml_branch_coverage=1 00:09:29.422 --rc genhtml_function_coverage=1 00:09:29.422 --rc genhtml_legend=1 00:09:29.422 --rc geninfo_all_blocks=1 00:09:29.422 --rc geninfo_unexecuted_blocks=1 00:09:29.422 00:09:29.422 ' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:29.422 17:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.422 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:29.423 17:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.423 Cannot find device "nvmf_init_br" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.423 Cannot find device "nvmf_init_br2" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:29.423 Cannot find device "nvmf_tgt_br" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.423 Cannot find device "nvmf_tgt_br2" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:29.423 Cannot find device "nvmf_init_br" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:29.423 Cannot find device "nvmf_init_br2" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:29.423 Cannot find device "nvmf_tgt_br" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:29.423 Cannot find device "nvmf_tgt_br2" 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:09:29.423 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:29.682 Cannot find device "nvmf_br" 00:09:29.682 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:09:29.682 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:29.683 Cannot find 
device "nvmf_init_if" 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:29.683 Cannot find device "nvmf_init_if2" 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.683 17:08:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.683 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.941 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:29.942 00:09:29.942 --- 10.0.0.3 ping statistics --- 00:09:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.942 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.942 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.942 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:29.942 00:09:29.942 --- 10.0.0.4 ping statistics --- 00:09:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.942 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
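For reference, the veth fixture these pings are validating condenses to the sketch below. This is a minimal re-statement of the nvmf/common.sh steps traced above, not the script itself; only one of the two initiator/target pairs is shown, with interface names and addresses copied from the trace:

    # Build an isolated NVMe-oF test network: the target end lives in its own netns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the two free peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # ipts() tags every rule with an SPDK_NVMF comment so teardown can strip
    # them in bulk later (iptables-save | grep -v SPDK_NVMF | iptables-restore):
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings then prove both directions of the path: host into the namespace (10.0.0.3, 10.0.0.4) and namespace back to the host (10.0.0.1, 10.0.0.2).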
00:09:29.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:29.942 00:09:29.942 --- 10.0.0.1 ping statistics --- 00:09:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.942 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:29.942 00:09:29.942 --- 10.0.0.2 ping statistics --- 00:09:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.942 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71168 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71168 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71168 ']' 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.942 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.320 17:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:09:31.320 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:41.292 Initializing NVMe Controllers
00:09:41.292 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:09:41.292 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:41.292 Initialization complete. Launching workers.
00:09:41.292 ========================================================
00:09:41.292                                                                                          Latency(us)
00:09:41.292 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:41.292 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   14738.99      57.57    4343.20     826.21   26281.79
00:09:41.292 ========================================================
00:09:41.292 Total                                                                    :   14738.99      57.57    4343.20     826.21   26281.79
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:41.551 17:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:41.551 rmmod nvme_tcp
00:09:41.551 rmmod nvme_fabrics
00:09:41.551 rmmod nvme_keyring
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71168 ']'
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71168
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71168 ']'
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71168
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71168
00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:41.551 17:08:34
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:41.551 killing process with pid 71168 00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71168' 00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71168 00:09:41.551 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71168 00:09:42.118 nvmf threads initialize successfully 00:09:42.118 bdev subsystem init successfully 00:09:42.118 created a nvmf target service 00:09:42.118 create targets's poll groups done 00:09:42.118 all subsystems of target started 00:09:42.118 nvmf target is running 00:09:42.118 all subsystems of target stopped 00:09:42.118 destroy targets's poll groups done 00:09:42.118 destroyed the nvmf target service 00:09:42.118 bdev subsystem finish successfully 00:09:42.118 nvmf threads destroy successfully 00:09:42.118 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.118 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.119 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 00:09:42.377 real 0m13.061s 00:09:42.377 user 0m45.619s 00:09:42.377 sys 0m2.314s 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 ************************************ 00:09:42.377 END TEST nvmf_example 00:09:42.377 ************************************ 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:42.377 ************************************ 00:09:42.377 START TEST nvmf_filesystem 00:09:42.377 ************************************ 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:42.377 * Looking for test storage... 
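Before the filesystem test gets going, the target bring-up that nvmf_example performed above through rpc_cmd is worth restating as a plain sequence. This is a sketch assuming scripts/rpc.py against the default /var/tmp/spdk.sock (which is what rpc_cmd wraps here), not the test script itself:

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport (flags as traced above)
    rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Load generator that produced the latency table above: 64-deep queue,
    # 4 KiB random I/O, 30% reads, 10 seconds.
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Teardown then mirrors setup, as traced just above: the tagged iptables rules are dropped in one pass (iptables-save | grep -v SPDK_NVMF | iptables-restore), the bridge and veth links are deleted, and the namespace is removed.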
00:09:42.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.377 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.639 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.639 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.639 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.640 --rc genhtml_branch_coverage=1 00:09:42.640 --rc genhtml_function_coverage=1 00:09:42.640 --rc genhtml_legend=1 00:09:42.640 --rc geninfo_all_blocks=1 00:09:42.640 --rc geninfo_unexecuted_blocks=1 00:09:42.640 00:09:42.640 ' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.640 --rc genhtml_branch_coverage=1 00:09:42.640 --rc genhtml_function_coverage=1 00:09:42.640 --rc genhtml_legend=1 00:09:42.640 --rc geninfo_all_blocks=1 00:09:42.640 --rc geninfo_unexecuted_blocks=1 00:09:42.640 00:09:42.640 ' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.640 --rc genhtml_branch_coverage=1 00:09:42.640 --rc genhtml_function_coverage=1 00:09:42.640 --rc genhtml_legend=1 00:09:42.640 --rc geninfo_all_blocks=1 00:09:42.640 --rc geninfo_unexecuted_blocks=1 00:09:42.640 00:09:42.640 ' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.640 --rc genhtml_branch_coverage=1 00:09:42.640 --rc genhtml_function_coverage=1 00:09:42.640 --rc genhtml_legend=1 00:09:42.640 --rc geninfo_all_blocks=1 00:09:42.640 --rc geninfo_unexecuted_blocks=1 00:09:42.640 00:09:42.640 ' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:42.640 17:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:42.640 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
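Stepping back to the lcov version gate traced just before build_config.sh was sourced: the lt/cmp_versions helpers from scripts/common.sh compare dotted version strings field by field. A condensed sketch of that logic follows (a hypothetical reimplementation for illustration; the real cmp_versions additionally validates each field through decimal() before comparing):

    lt() {   # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2 (numeric fields only)
        local IFS=.-:                    # split fields on '.', '-' and ':'
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # earlier field decides: strictly less
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1   # strictly greater
        done
        return 1                         # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # the branch taken in the trace above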
00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:42.641 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:42.641 #define SPDK_CONFIG_H 00:09:42.641 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:42.641 #define SPDK_CONFIG_APPS 1 00:09:42.641 #define SPDK_CONFIG_ARCH 
native 00:09:42.641 #undef SPDK_CONFIG_ASAN 00:09:42.641 #define SPDK_CONFIG_AVAHI 1 00:09:42.641 #undef SPDK_CONFIG_CET 00:09:42.641 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:42.641 #define SPDK_CONFIG_COVERAGE 1 00:09:42.641 #define SPDK_CONFIG_CROSS_PREFIX 00:09:42.641 #undef SPDK_CONFIG_CRYPTO 00:09:42.641 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:42.641 #undef SPDK_CONFIG_CUSTOMOCF 00:09:42.641 #undef SPDK_CONFIG_DAOS 00:09:42.641 #define SPDK_CONFIG_DAOS_DIR 00:09:42.641 #define SPDK_CONFIG_DEBUG 1 00:09:42.641 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:42.641 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:42.641 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:42.641 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:42.641 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:42.641 #undef SPDK_CONFIG_DPDK_UADK 00:09:42.641 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:42.641 #define SPDK_CONFIG_EXAMPLES 1 00:09:42.641 #undef SPDK_CONFIG_FC 00:09:42.641 #define SPDK_CONFIG_FC_PATH 00:09:42.641 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:42.641 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:42.641 #define SPDK_CONFIG_FSDEV 1 00:09:42.641 #undef SPDK_CONFIG_FUSE 00:09:42.641 #undef SPDK_CONFIG_FUZZER 00:09:42.641 #define SPDK_CONFIG_FUZZER_LIB 00:09:42.641 #define SPDK_CONFIG_GOLANG 1 00:09:42.641 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:42.641 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:42.641 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:42.641 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:42.641 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:42.641 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:42.641 #undef SPDK_CONFIG_HAVE_LZ4 00:09:42.641 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:42.641 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:42.641 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:42.641 #define SPDK_CONFIG_IDXD 1 00:09:42.641 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:42.641 #undef SPDK_CONFIG_IPSEC_MB 00:09:42.641 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:42.641 #define SPDK_CONFIG_ISAL 1 00:09:42.641 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:42.641 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:42.641 #define SPDK_CONFIG_LIBDIR 00:09:42.641 #undef SPDK_CONFIG_LTO 00:09:42.641 #define SPDK_CONFIG_MAX_LCORES 128 00:09:42.641 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:42.641 #define SPDK_CONFIG_NVME_CUSE 1 00:09:42.641 #undef SPDK_CONFIG_OCF 00:09:42.641 #define SPDK_CONFIG_OCF_PATH 00:09:42.641 #define SPDK_CONFIG_OPENSSL_PATH 00:09:42.641 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:42.641 #define SPDK_CONFIG_PGO_DIR 00:09:42.641 #undef SPDK_CONFIG_PGO_USE 00:09:42.641 #define SPDK_CONFIG_PREFIX /usr/local 00:09:42.641 #undef SPDK_CONFIG_RAID5F 00:09:42.641 #undef SPDK_CONFIG_RBD 00:09:42.641 #define SPDK_CONFIG_RDMA 1 00:09:42.642 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:42.642 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:42.642 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:42.642 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:42.642 #define SPDK_CONFIG_SHARED 1 00:09:42.642 #undef SPDK_CONFIG_SMA 00:09:42.642 #define SPDK_CONFIG_TESTS 1 00:09:42.642 #undef SPDK_CONFIG_TSAN 00:09:42.642 #define SPDK_CONFIG_UBLK 1 00:09:42.642 #define SPDK_CONFIG_UBSAN 1 00:09:42.642 #undef SPDK_CONFIG_UNIT_TESTS 00:09:42.642 #undef SPDK_CONFIG_URING 00:09:42.642 #define SPDK_CONFIG_URING_PATH 00:09:42.642 #undef SPDK_CONFIG_URING_ZNS 00:09:42.642 #define SPDK_CONFIG_USDT 1 00:09:42.642 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:42.642 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:42.642 
#undef SPDK_CONFIG_VFIO_USER 00:09:42.642 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:42.642 #define SPDK_CONFIG_VHOST 1 00:09:42.642 #define SPDK_CONFIG_VIRTIO 1 00:09:42.642 #undef SPDK_CONFIG_VTUNE 00:09:42.642 #define SPDK_CONFIG_VTUNE_DIR 00:09:42.642 #define SPDK_CONFIG_WERROR 1 00:09:42.642 #define SPDK_CONFIG_WPDK_DIR 00:09:42.642 #undef SPDK_CONFIG_XNVME 00:09:42.642 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:42.642 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:42.643 
17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:42.643 17:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:42.643 17:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.643 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:42.644 17:08:35 
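The sanitizer setup traced above builds a LeakSanitizer suppression file on the fly (a single known libfuse3 leak) and wires it in through LSAN_OPTIONS, alongside the exported ASan/UBSan option strings. A self-contained reproduction of the same wiring; the option strings are taken directly from the trace, while "./my_asan_test" is a placeholder for any ASan-instrumented binary:

  #!/usr/bin/env bash
  # Recreate the suppression-file wiring seen in the trace.
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo 'leak:libfuse3.so' > "$supp"   # ignore the known libfuse3 leak

  export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
  export LSAN_OPTIONS="suppressions=$supp"

  ./my_asan_test   # hypothetical binary; leaks matching the file are not reported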
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71446 ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71446 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.yqZcAg 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.yqZcAg/tests/target /tmp/spdk.yqZcAg 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:09:42.644 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979394048 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589606400 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256398336 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486435840 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506575872 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979394048 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589606400 00:09:42.645 
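What the array assignments above are doing: set_test_storage reads `df -T` line by line into parallel associative arrays (filesystem type, total size, available bytes) keyed by mount point, then looks for a candidate directory whose filesystem can hold the requested 2 GiB. A condensed, standalone sketch of that bookkeeping under the assumption of standard GNU `df -T` column order; the candidate directories are placeholders:

  #!/usr/bin/env bash
  # Pick a directory with at least $1 bytes free, mirroring the df -T
  # bookkeeping in the trace. Candidate paths below are illustrative.
  requested_size=${1:-2147483648}
  declare -A fss avails

  while read -r source fs size used avail _ mount; do
      fss[$mount]=$fs
      avails[$mount]=$((avail * 1024))   # df -T reports 1K blocks
  done < <(df -T | tail -n +2)

  for dir in "$HOME/tests" /tmp; do
      # Resolve $dir to the mount point that contains it.
      mp=$(df --output=target "$dir" 2>/dev/null | tail -n 1)
      [[ -n ${avails[$mp]:-} ]] || continue
      if (( avails[$mp] >= requested_size )); then
          echo "using $dir on $mp (${fss[$mp]}, ${avails[$mp]} bytes free)"
          exit 0
      fi
  done
  echo "no candidate has $requested_size bytes free" >&2
  exit 1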
17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266298368 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266433536 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=135168 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=96040771584 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=3662008320 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:42.645 * Looking for test storage... 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979394048 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:42.645 17:08:35 
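The errtrace/extdebug/PS4 sequence traced just above is what generates the prefix on every line of this log: bash expands PS4 like a prompt before each traced command, so the `\t` escape stamps the wall-clock time and the parameter expansions append the test domain and source@line. A minimal reproduction of that instrumentation, with a stand-in domain name:

  #!/usr/bin/env bash
  # Reproduce the xtrace prefix format seen throughout this log.
  test_domain=demo.example            # stand-in for e.g. nvmf_tcp.nvmf_target_extra
  set -o errtrace                     # let the ERR trap fire inside functions too
  shopt -s extdebug
  trap 'trap - ERR; echo "backtrace would print here" >&2' ERR
  PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
  set -x

  echo hello   # traced as: HH:MM:SS demo.example -- <file>@<line> -- $ echo hello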
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.645 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.646 --rc genhtml_branch_coverage=1 00:09:42.646 --rc genhtml_function_coverage=1 00:09:42.646 --rc genhtml_legend=1 00:09:42.646 --rc geninfo_all_blocks=1 00:09:42.646 --rc geninfo_unexecuted_blocks=1 00:09:42.646 00:09:42.646 ' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.646 --rc genhtml_branch_coverage=1 00:09:42.646 --rc genhtml_function_coverage=1 00:09:42.646 --rc genhtml_legend=1 00:09:42.646 --rc geninfo_all_blocks=1 00:09:42.646 --rc geninfo_unexecuted_blocks=1 00:09:42.646 00:09:42.646 ' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.646 --rc genhtml_branch_coverage=1 00:09:42.646 --rc genhtml_function_coverage=1 00:09:42.646 --rc genhtml_legend=1 00:09:42.646 --rc geninfo_all_blocks=1 00:09:42.646 --rc geninfo_unexecuted_blocks=1 00:09:42.646 00:09:42.646 ' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.646 --rc genhtml_branch_coverage=1 00:09:42.646 --rc genhtml_function_coverage=1 00:09:42.646 --rc genhtml_legend=1 00:09:42.646 --rc geninfo_all_blocks=1 00:09:42.646 --rc geninfo_unexecuted_blocks=1 00:09:42.646 00:09:42.646 ' 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.646 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.906 17:08:35 
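The non-fatal error captured above (`[: : integer expression expected` at nvmf/common.sh line 33) comes from the traced test `'[' '' -eq 1 ']'`: the variable behind the comparison expands to an empty string, so `-eq` has no integer operand. A defensive rewrite would default the operand before comparing; the variable name below is a placeholder, since the trace shows only the expanded (empty) value:

  #!/usr/bin/env bash
  # SOME_FLAG is hypothetical, standing in for whichever variable expands
  # empty at nvmf/common.sh line 33.
  SOME_FLAG=''

  # Fails exactly like the log: [ '' -eq 1 ] is not an integer comparison
  # (exit status 2, message on stderr).
  [ "$SOME_FLAG" -eq 1 ] 2>/dev/null && echo unreachable

  # Defaulting the operand keeps the test well-formed either way.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag set"
  else
      echo "flag unset or zero"
  fi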
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:42.906 Cannot find device "nvmf_init_br" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:42.906 Cannot find device "nvmf_init_br2" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:42.906 Cannot find device "nvmf_tgt_br" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.906 Cannot find device "nvmf_tgt_br2" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:42.906 Cannot find device "nvmf_init_br" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:42.906 Cannot find device "nvmf_init_br2" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:42.906 Cannot find device "nvmf_tgt_br" 00:09:42.906 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:42.907 Cannot find device "nvmf_tgt_br2" 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:42.907 Cannot find device "nvmf_br" 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:42.907 Cannot find device "nvmf_init_if" 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:42.907 Cannot find device "nvmf_init_if2" 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.907 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.907 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:43.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.146 ms 00:09:43.166 00:09:43.166 --- 10.0.0.3 ping statistics --- 00:09:43.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.166 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:43.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:43.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:43.166 00:09:43.166 --- 10.0.0.4 ping statistics --- 00:09:43.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.166 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:43.166 00:09:43.166 --- 10.0.0.1 ping statistics --- 00:09:43.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.166 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:43.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:43.166 ************************************
00:09:43.166 START TEST nvmf_filesystem_no_in_capsule
00:09:43.166 ************************************
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.166 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71631
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71631
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71631 ']'
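nvmfappstart then launches the target. Because NVMF_APP was prefixed with the namespace wrapper above, nvmf_tgt runs inside nvmf_tgt_ns_spdk with a four-core mask (-m 0xF) and all tracepoint groups enabled (-e 0xFFFF), and waitforlisten blocks until the RPC socket is serviceable. A minimal stand-in for those two helpers, with the binary path taken verbatim from the log (the polling loop is simplified; the real waitforlisten retries an RPC against the socket):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified: wait for the RPC socket to appear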
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:43.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:43.167 17:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.426 [2024-11-19 17:08:35.736383] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:09:43.426 [2024-11-19 17:08:35.736503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:43.426 [2024-11-19 17:08:35.890324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:43.685 [2024-11-19 17:08:35.965117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:43.685 [2024-11-19 17:08:35.965228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:43.685 [2024-11-19 17:08:35.965255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:43.685 [2024-11-19 17:08:35.965267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:43.685 [2024-11-19 17:08:35.965279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:43.686 [2024-11-19 17:08:35.966990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:43.686 [2024-11-19 17:08:35.967165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:43.686 [2024-11-19 17:08:35.967603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:43.686 [2024-11-19 17:08:35.967612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.686 [2024-11-19 17:08:36.193901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.686 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.945 Malloc1
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:43.945 [2024-11-19 17:08:36.445820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
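With pid 71631 up, target/filesystem.sh provisions the target entirely over RPC: a TCP transport with an 8192-byte IO unit and in-capsule data disabled (-c 0, this being the no_in_capsule variant), a 512 MiB malloc bdev (1048576 blocks of 512 bytes, as the JSON dump below confirms), subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as its namespace, and a listener on 10.0.0.3:4420. The rpc_cmd wrapper in the log drives SPDK's scripts/rpc.py, so the equivalent standalone sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420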
"dma_device_type": 2 00:09:43.945 } 00:09:43.945 ], 00:09:43.945 "name": "Malloc1", 00:09:43.945 "num_blocks": 1048576, 00:09:43.945 "product_name": "Malloc disk", 00:09:43.945 "supported_io_types": { 00:09:43.945 "abort": true, 00:09:43.945 "compare": false, 00:09:43.945 "compare_and_write": false, 00:09:43.945 "copy": true, 00:09:43.945 "flush": true, 00:09:43.945 "get_zone_info": false, 00:09:43.945 "nvme_admin": false, 00:09:43.945 "nvme_io": false, 00:09:43.945 "nvme_io_md": false, 00:09:43.945 "nvme_iov_md": false, 00:09:43.945 "read": true, 00:09:43.945 "reset": true, 00:09:43.945 "seek_data": false, 00:09:43.945 "seek_hole": false, 00:09:43.945 "unmap": true, 00:09:43.945 "write": true, 00:09:43.945 "write_zeroes": true, 00:09:43.945 "zcopy": true, 00:09:43.945 "zone_append": false, 00:09:43.945 "zone_management": false 00:09:43.945 }, 00:09:43.945 "uuid": "61d991ef-ed75-4796-b692-f3e283642b34", 00:09:43.945 "zoned": false 00:09:43.945 } 00:09:43.945 ]' 00:09:43.945 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:44.204 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:44.462 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.462 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:44.462 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.462 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:44.462 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:09:46.365 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:09:46.623 17:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:47.558 ************************************
00:09:47.558 START TEST filesystem_ext4
00:09:47.558 ************************************
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:09:47.558 17:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:47.558 mke2fs 1.47.0 (5-Feb-2023)
00:09:47.818 Discarding device blocks: 0/522240 done
00:09:47.818 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:47.818 Filesystem UUID: a59766b5-1561-4eea-a808-db14f5f079d6
00:09:47.818 Superblock backups stored on blocks:
00:09:47.818 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:47.818
00:09:47.818 Allocating group tables: 0/64 done
00:09:47.818 Writing inode tables: 0/64 done
00:09:47.818 Creating journal (8192 blocks): done
00:09:47.818 Writing superblocks and filesystem accounting information: 0/64 done
00:09:47.818
00:09:47.818 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:09:47.818 17:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:53.088 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:53.088 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:53.088 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
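Each filesystem_* sub-test here and below is the same short exercise against the GPT partition created earlier: make the filesystem, mount it, create and sync a file, delete it, unmount, then confirm the target survived and the device plus partition still enumerate. In outline (ext4 shown; the btrfs and xfs passes differ only in the mkfs command, which forces with -f instead of -F):

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible...
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # ...and so is the partition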
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71631
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
************************************
00:09:53.346 END TEST filesystem_ext4
************************************
00:09:53.346
00:09:53.346 real 0m5.685s
00:09:53.346 user 0m0.028s
00:09:53.346 sys 0m0.071s
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.346 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:53.347 ************************************
00:09:53.347 START TEST filesystem_btrfs
00:09:53.347 ************************************
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:09:53.347 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:53.606 btrfs-progs v6.8.1
00:09:53.606 See https://btrfs.readthedocs.io for more information.
00:09:53.606
00:09:53.606 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:53.606 NOTE: several default settings have changed in version 5.15, please make sure
00:09:53.606 this does not affect your deployments:
00:09:53.606 - DUP for metadata (-m dup)
00:09:53.606 - enabled no-holes (-O no-holes)
00:09:53.606 - enabled free-space-tree (-R free-space-tree)
00:09:53.606
00:09:53.606 Label: (null)
00:09:53.606 UUID: 13f0e300-2e98-441c-a185-e14aa4ac2959
00:09:53.606 Node size: 16384
00:09:53.606 Sector size: 4096 (CPU page size: 4096)
00:09:53.606 Filesystem size: 510.00MiB
00:09:53.606 Block group profiles:
00:09:53.606 Data: single 8.00MiB
00:09:53.606 Metadata: DUP 32.00MiB
00:09:53.606 System: DUP 8.00MiB
00:09:53.606 SSD detected: yes
00:09:53.606 Zoned device: no
00:09:53.606 Features: extref, skinny-metadata, no-holes, free-space-tree
00:09:53.606 Checksum: crc32c
00:09:53.606 Number of devices: 1
00:09:53.606 Devices:
00:09:53.606 ID SIZE PATH
00:09:53.606 1 510.00MiB /dev/nvme0n1p1
00:09:53.606
00:09:53.606 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:09:53.606 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:53.606 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:53.606 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71631
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
************************************
00:09:53.606 END TEST filesystem_btrfs
************************************
00:09:53.606
00:09:53.606 real 0m0.355s
00:09:53.606 user 0m0.027s
00:09:53.606 sys 0m0.061s
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:53.606 ************************************
00:09:53.606 START TEST filesystem_xfs
00:09:53.606 ************************************
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:09:53.606 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:09:53.607 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:53.865 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:53.865 = sectsz=512 attr=2, projid32bit=1
00:09:53.865 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:53.865 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:53.865 data = bsize=4096 blocks=130560, imaxpct=25
00:09:53.865 = sunit=0 swidth=0 blks
00:09:53.865 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:53.865 log =internal log bsize=4096 blocks=16384, version=2
00:09:53.865 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:53.865 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:54.432 Discarding blocks...Done.
00:09:54.432 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:09:54.432 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71631
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
************************************
00:09:56.968 END TEST filesystem_xfs
************************************
00:09:56.968
00:09:56.968 real 0m3.175s
00:09:56.968 user 0m0.029s
00:09:56.968 sys 0m0.053s
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:56.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71631
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71631 ']'
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71631
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71631
00:09:56.968 killing process with pid 71631
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71631'
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71631
00:09:56.968 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 71631
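After ext4, btrfs, and xfs all pass, the harness tears the setup down in reverse: drop the partition, flush, disconnect the initiator, delete the subsystem over RPC, and kill the target, waiting for pid 71631 to exit. Mirroring the commands above (with rpc.py again standing in for the rpc_cmd wrapper):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"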
00:09:57.536 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:09:57.536
00:09:57.536 real 0m14.377s
00:09:57.536 user 0m54.964s
00:09:57.536 sys 0m1.813s
00:09:57.536 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.536 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:57.536 ************************************
00:09:57.536 END TEST nvmf_filesystem_no_in_capsule
00:09:57.536 ************************************
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:57.794 ************************************
00:09:57.794 START TEST nvmf_filesystem_in_capsule
00:09:57.794 ************************************
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71991
00:09:57.794 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71991
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71991 ']'
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:57.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:57.795 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.053 [2024-11-19 17:08:50.171507] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:09:58.053 [2024-11-19 17:08:50.171628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:58.053 [2024-11-19 17:08:50.321185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:58.053 [2024-11-19 17:08:50.373553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:58.053 [2024-11-19 17:08:50.373674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:58.053 [2024-11-19 17:08:50.373687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:58.053 [2024-11-19 17:08:50.373696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:58.053 [2024-11-19 17:08:50.373703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:58.053 [2024-11-19 17:08:50.375296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:58.053 [2024-11-19 17:08:50.375471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:58.053 [2024-11-19 17:08:50.375628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:58.053 [2024-11-19 17:08:50.375633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.053 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.053 [2024-11-19 17:08:50.579168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
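The second pass, nvmf_filesystem_in_capsule, repeats the whole sequence against a fresh target (pid 71991). The one functional difference is visible in the create_transport call above: -c 4096 instead of -c 0, so up to 4 KiB of write data can ride inside the NVMe/TCP command capsule rather than being pulled by the controller in a separate ready-to-transfer exchange. Everything that follows (the malloc bdev, cnode1, the listener on 10.0.0.3:4420, connect, parted, and the three filesystems) is the same as the first pass:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # sole change vs. the no_in_capsule run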
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.312 Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.312 [2024-11-19 17:08:50.820564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.312 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:58.571 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.571 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:58.571 {
00:09:58.571 "aliases": [
00:09:58.571 "e0bb9d73-8dae-406c-a477-0eb6e8ad1d01"
00:09:58.571 ],
00:09:58.571 "assigned_rate_limits": {
00:09:58.571 "r_mbytes_per_sec": 0,
00:09:58.571 "rw_ios_per_sec": 0,
00:09:58.571 "rw_mbytes_per_sec": 0,
00:09:58.571 "w_mbytes_per_sec": 0
00:09:58.571 },
00:09:58.571 "block_size": 512,
00:09:58.571 "claim_type": "exclusive_write",
00:09:58.571 "claimed": true,
00:09:58.571 "driver_specific": {},
00:09:58.571 "memory_domains": [
00:09:58.571 {
00:09:58.572 "dma_device_id": "system",
00:09:58.572 "dma_device_type": 1
00:09:58.572 },
00:09:58.572 {
00:09:58.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.572 "dma_device_type": 2
00:09:58.572 }
00:09:58.572 ],
00:09:58.572 "name": "Malloc1",
00:09:58.572 "num_blocks": 1048576,
00:09:58.572 "product_name": "Malloc disk",
00:09:58.572 "supported_io_types": {
00:09:58.572 "abort": true,
00:09:58.572 "compare": false,
00:09:58.572 "compare_and_write": false,
00:09:58.572 "copy": true,
00:09:58.572 "flush": true,
00:09:58.572 "get_zone_info": false,
00:09:58.572 "nvme_admin": false,
00:09:58.572 "nvme_io": false,
00:09:58.572 "nvme_io_md": false,
00:09:58.572 "nvme_iov_md": false,
00:09:58.572 "read": true,
00:09:58.572 "reset": true,
00:09:58.572 "seek_data": false,
00:09:58.572 "seek_hole": false,
00:09:58.572 "unmap": true,
00:09:58.572 "write": true,
00:09:58.572 "write_zeroes": true,
00:09:58.572 "zcopy": true,
00:09:58.572 "zone_append": false,
00:09:58.572 "zone_management": false
00:09:58.572 },
00:09:58.572 "uuid": "e0bb9d73-8dae-406c-a477-0eb6e8ad1d01",
00:09:58.572 "zoned": false
00:09:58.572 }
00:09:58.572 ]'
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:58.572 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:09:58.831 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:58.831 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:09:58.831 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:09:58.831 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:09:58.831 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:00.738 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:00.997 17:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.933 ************************************ 00:10:01.933 START TEST filesystem_in_capsule_ext4 00:10:01.933 ************************************ 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:01.933 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:01.933 mke2fs 1.47.0 (5-Feb-2023) 00:10:02.191 Discarding device blocks: 0/522240 done 00:10:02.191 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:02.191 Filesystem UUID: 473fbf81-5029-402a-a175-709056393ca6 00:10:02.191 Superblock backups stored on blocks: 00:10:02.191 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:02.191 00:10:02.191 Allocating group tables: 0/64 done 00:10:02.191 Writing inode tables: 
0/64 done 00:10:02.191 Creating journal (8192 blocks): done 00:10:02.191 Writing superblocks and filesystem accounting information: 0/64 done 00:10:02.191 00:10:02.191 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:02.191 17:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71991 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.460 00:10:07.460 real 0m5.604s 00:10:07.460 user 0m0.030s 00:10:07.460 sys 0m0.058s 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:07.460 ************************************ 00:10:07.460 END TEST filesystem_in_capsule_ext4 00:10:07.460 ************************************ 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.460 
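Each filesystem case in this section ends with the same smoke test just traced for ext4: mount the new filesystem, create and delete a file with a sync on either side, unmount, then confirm that the target process (kill -0 on its pid) and the block device and its partition (the two lsblk | grep checks) all survived. Condensed, the loop each fstype test runs looks roughly like this; 71991 is this run's target pid, and the retry path around umount is elided:

    # Sketch of the per-fstype smoke test from target/filesystem.sh@23-43.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa          # write something through the new filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # device still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # ...and so is the partition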
************************************ 00:10:07.460 START TEST filesystem_in_capsule_btrfs 00:10:07.460 ************************************ 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:07.460 17:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:07.718 btrfs-progs v6.8.1 00:10:07.718 See https://btrfs.readthedocs.io for more information. 00:10:07.718 00:10:07.718 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:07.718 NOTE: several default settings have changed in version 5.15, please make sure 00:10:07.718 this does not affect your deployments: 00:10:07.718 - DUP for metadata (-m dup) 00:10:07.718 - enabled no-holes (-O no-holes) 00:10:07.718 - enabled free-space-tree (-R free-space-tree) 00:10:07.718 00:10:07.718 Label: (null) 00:10:07.718 UUID: 6ae06b6d-350f-42be-b1de-2d2748236d10 00:10:07.718 Node size: 16384 00:10:07.718 Sector size: 4096 (CPU page size: 4096) 00:10:07.719 Filesystem size: 510.00MiB 00:10:07.719 Block group profiles: 00:10:07.719 Data: single 8.00MiB 00:10:07.719 Metadata: DUP 32.00MiB 00:10:07.719 System: DUP 8.00MiB 00:10:07.719 SSD detected: yes 00:10:07.719 Zoned device: no 00:10:07.719 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:07.719 Checksum: crc32c 00:10:07.719 Number of devices: 1 00:10:07.719 Devices: 00:10:07.719 ID SIZE PATH 00:10:07.719 1 510.00MiB /dev/nvme0n1p1 00:10:07.719 00:10:07.719 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:07.719 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.719 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.719 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:07.719 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71991 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.978 00:10:07.978 real 0m0.307s 00:10:07.978 user 0m0.021s 00:10:07.978 sys 0m0.060s 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:10:07.978 ************************************ 00:10:07.978 END TEST filesystem_in_capsule_btrfs 00:10:07.978 ************************************ 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.978 ************************************ 00:10:07.978 START TEST filesystem_in_capsule_xfs 00:10:07.978 ************************************ 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:07.978 17:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:07.978 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:07.978 = sectsz=512 attr=2, projid32bit=1 00:10:07.978 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:07.978 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:07.978 data = bsize=4096 blocks=130560, imaxpct=25 00:10:07.978 = sunit=0 swidth=0 blks 00:10:07.978 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:07.978 log =internal log bsize=4096 blocks=16384, version=2 00:10:07.978 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:07.978 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:08.914 Discarding blocks...Done. 
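All three cases funnel through the same make_filesystem helper, and the only fstype-specific branch the trace shows is the force flag: mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs take -f. Reduced to the core visible in the trace (the helper's retry counter i is elided), the dispatch is roughly:

    # Sketch of make_filesystem() as traced at common/autotest_common.sh@930-941.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F        # mkfs.ext4 forces with uppercase -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs force with lowercase -f
        fi
        "mkfs.$fstype" "$force" "$dev_name"
    }

So make_filesystem xfs /dev/nvme0n1p1 reproduces the mkfs.xfs -f call whose output appears above.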
00:10:08.914 17:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:08.914 17:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71991 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.895 17:09:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.895 00:10:10.895 real 0m2.690s 00:10:10.895 user 0m0.033s 00:10:10.895 sys 0m0.046s 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:10.895 ************************************ 00:10:10.895 END TEST filesystem_in_capsule_xfs 00:10:10.895 ************************************ 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71991 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71991 ']' 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71991 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71991 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71991' 00:10:10.895 killing process with pid 71991 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71991 00:10:10.895 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71991 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:11.484 00:10:11.484 real 0m13.683s 00:10:11.484 user 0m52.377s 00:10:11.484 sys 0m1.733s 00:10:11.484 17:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.484 ************************************ 00:10:11.484 END TEST nvmf_filesystem_in_capsule 00:10:11.484 ************************************ 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.484 rmmod nvme_tcp 00:10:11.484 rmmod nvme_fabrics 00:10:11.484 rmmod nvme_keyring 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.484 17:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.484 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.484 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
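Worth noting how the iptr step just traced removes the harness's firewall rules without tracking them individually: every rule the harness adds carries an SPDK_NVMF comment (the ipts wrapper at nvmf/common.sh@790, visible later in this log), so teardown can filter the whole ruleset in one pass. Both halves of that pattern, as a sketch:

    # Add rules tagged with an SPDK_NVMF comment (pattern from nvmf/common.sh@790):
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # Teardown (the iptr step above): re-load the ruleset minus every tagged rule.
    iptables-save | grep -v SPDK_NVMF | iptables-restore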
00:10:11.484 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:10:11.742 00:10:11.742 real 0m29.435s 00:10:11.742 user 1m47.807s 00:10:11.742 sys 0m4.121s 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.742 ************************************ 00:10:11.742 END TEST nvmf_filesystem 00:10:11.742 ************************************ 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.742 ************************************ 00:10:11.742 START TEST nvmf_target_discovery 00:10:11.742 ************************************ 00:10:11.742 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:12.001 * Looking for test storage... 
00:10:12.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.001 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.002 --rc genhtml_branch_coverage=1 00:10:12.002 --rc genhtml_function_coverage=1 00:10:12.002 --rc genhtml_legend=1 00:10:12.002 --rc geninfo_all_blocks=1 00:10:12.002 --rc geninfo_unexecuted_blocks=1 00:10:12.002 00:10:12.002 ' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.002 --rc genhtml_branch_coverage=1 00:10:12.002 --rc genhtml_function_coverage=1 00:10:12.002 --rc genhtml_legend=1 00:10:12.002 --rc geninfo_all_blocks=1 00:10:12.002 --rc geninfo_unexecuted_blocks=1 00:10:12.002 00:10:12.002 ' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.002 --rc genhtml_branch_coverage=1 00:10:12.002 --rc genhtml_function_coverage=1 00:10:12.002 --rc genhtml_legend=1 00:10:12.002 --rc geninfo_all_blocks=1 00:10:12.002 --rc geninfo_unexecuted_blocks=1 00:10:12.002 00:10:12.002 ' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.002 --rc genhtml_branch_coverage=1 00:10:12.002 --rc genhtml_function_coverage=1 00:10:12.002 --rc genhtml_legend=1 00:10:12.002 --rc geninfo_all_blocks=1 00:10:12.002 --rc geninfo_unexecuted_blocks=1 00:10:12.002 00:10:12.002 ' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.002 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
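The variables being set here describe a small virtual network: two initiator veth interfaces on the host side (10.0.0.1/24 and 10.0.0.2/24), two target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), all joined by the nvmf_br bridge, which is why the nvme connect earlier in this log dialed -a 10.0.0.3. The "Cannot find device" messages that follow are the expected first pass of teardown before a clean rebuild. One initiator/target pair of that build, condensed from the ip commands traced below:

    # Sketch: one veth pair of the nvmf_veth_init topology (the harness builds two).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br      # host-side peer joins the bridge
    ip link set nvmf_tgt_br  master nvmf_br      # target-side peer joins the bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3                           # host -> target, as verified below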
00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:12.003 Cannot find device "nvmf_init_br" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:12.003 Cannot find device "nvmf_init_br2" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:12.003 Cannot find device "nvmf_tgt_br" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.003 Cannot find device "nvmf_tgt_br2" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:12.003 Cannot find device "nvmf_init_br" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:12.003 Cannot find device "nvmf_init_br2" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:12.003 Cannot find device "nvmf_tgt_br" 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:10:12.003 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:12.261 Cannot find device "nvmf_tgt_br2" 00:10:12.261 17:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:12.261 Cannot find device "nvmf_br" 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:12.261 Cannot find device "nvmf_init_if" 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:12.261 Cannot find device "nvmf_init_if2" 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.261 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:12.262 17:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:12.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:12.262 00:10:12.262 --- 10.0.0.3 ping statistics --- 00:10:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.262 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:10:12.262 00:10:12.262 --- 10.0.0.4 ping statistics --- 00:10:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.262 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:12.262 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:12.520 00:10:12.520 --- 10.0.0.1 ping statistics --- 00:10:12.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.520 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:10:12.520 00:10:12.520 --- 10.0.0.2 ping statistics --- 00:10:12.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.520 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72573 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
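With all four ping directions green, nvmfappstart launches the target inside the namespace (the NVMF_APP re-prefixing at nvmf/common.sh@227 above is what turns the command into "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt ...") and then blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent; the socket path is the harness default and the poll is paraphrased rather than copied:

    # Sketch: start nvmf_tgt in the namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || break   # stop polling if the target died at startup
        sleep 0.5
    done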
00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72573 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72573 ']' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.520 17:09:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 [2024-11-19 17:09:04.901716] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:10:12.520 [2024-11-19 17:09:04.901841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.779 [2024-11-19 17:09:05.058308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.779 [2024-11-19 17:09:05.132689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.779 [2024-11-19 17:09:05.132808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.779 [2024-11-19 17:09:05.132826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.779 [2024-11-19 17:09:05.132837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.779 [2024-11-19 17:09:05.132848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
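The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which polls the freshly launched target until its RPC socket answers. A minimal stand-in (not the verbatim autotest helper; it assumes the default /var/tmp/spdk.sock shown in the trace and SPDK's stock scripts/rpc.py) would be:

  # Poll until the nvmf_tgt started above (pid in $nvmfpid) serves RPC,
  # bailing out early if the process dies during startup.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done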
00:10:12.779 [2024-11-19 17:09:05.134480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.779 [2024-11-19 17:09:05.134648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.779 [2024-11-19 17:09:05.135164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.779 [2024-11-19 17:09:05.135209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.779 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.779 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:12.779 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.779 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.779 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.037 [2024-11-19 17:09:05.352858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:13.037 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 Null1 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 [2024-11-19 17:09:05.398031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 Null2 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:13.038 Null3 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 Null4 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.038 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 4420 00:10:13.298 00:10:13.298 Discovery Log Number of Records 6, Generation counter 6 00:10:13.298 =====Discovery Log Entry 0====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: current discovery subsystem 00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4420 00:10:13.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: explicit discovery connections, duplicate discovery information 00:10:13.298 sectype: none 00:10:13.298 =====Discovery Log Entry 1====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: nvme subsystem 00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4420 00:10:13.298 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: none 00:10:13.298 sectype: none 00:10:13.298 =====Discovery Log Entry 2====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: nvme subsystem 00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4420 00:10:13.298 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: none 00:10:13.298 sectype: none 00:10:13.298 =====Discovery Log Entry 3====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: nvme subsystem 00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4420 00:10:13.298 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: none 00:10:13.298 sectype: none 00:10:13.298 =====Discovery Log Entry 4====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: nvme subsystem 
00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4420 00:10:13.298 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: none 00:10:13.298 sectype: none 00:10:13.298 =====Discovery Log Entry 5====== 00:10:13.298 trtype: tcp 00:10:13.298 adrfam: ipv4 00:10:13.298 subtype: discovery subsystem referral 00:10:13.298 treq: not required 00:10:13.298 portid: 0 00:10:13.298 trsvcid: 4430 00:10:13.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:13.298 traddr: 10.0.0.3 00:10:13.298 eflags: none 00:10:13.298 sectype: none 00:10:13.298 Perform nvmf subsystem discovery via RPC 00:10:13.298 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:13.298 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:13.298 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.298 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.298 [ 00:10:13.298 { 00:10:13.298 "allow_any_host": true, 00:10:13.298 "hosts": [], 00:10:13.298 "listen_addresses": [ 00:10:13.298 { 00:10:13.298 "adrfam": "IPv4", 00:10:13.298 "traddr": "10.0.0.3", 00:10:13.298 "trsvcid": "4420", 00:10:13.298 "trtype": "TCP" 00:10:13.298 } 00:10:13.298 ], 00:10:13.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:13.298 "subtype": "Discovery" 00:10:13.298 }, 00:10:13.298 { 00:10:13.298 "allow_any_host": true, 00:10:13.298 "hosts": [], 00:10:13.298 "listen_addresses": [ 00:10:13.298 { 00:10:13.298 "adrfam": "IPv4", 00:10:13.298 "traddr": "10.0.0.3", 00:10:13.298 "trsvcid": "4420", 00:10:13.298 "trtype": "TCP" 00:10:13.298 } 00:10:13.298 ], 00:10:13.298 "max_cntlid": 65519, 00:10:13.298 "max_namespaces": 32, 00:10:13.298 "min_cntlid": 1, 00:10:13.298 "model_number": "SPDK bdev Controller", 00:10:13.298 "namespaces": [ 00:10:13.298 { 00:10:13.298 "bdev_name": "Null1", 00:10:13.298 "name": "Null1", 00:10:13.298 "nguid": "13ABD114D63844E4A908015B9C811291", 00:10:13.298 "nsid": 1, 00:10:13.298 "uuid": "13abd114-d638-44e4-a908-015b9c811291" 00:10:13.298 } 00:10:13.298 ], 00:10:13.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.298 "serial_number": "SPDK00000000000001", 00:10:13.298 "subtype": "NVMe" 00:10:13.298 }, 00:10:13.298 { 00:10:13.298 "allow_any_host": true, 00:10:13.298 "hosts": [], 00:10:13.298 "listen_addresses": [ 00:10:13.298 { 00:10:13.298 "adrfam": "IPv4", 00:10:13.298 "traddr": "10.0.0.3", 00:10:13.298 "trsvcid": "4420", 00:10:13.298 "trtype": "TCP" 00:10:13.298 } 00:10:13.298 ], 00:10:13.298 "max_cntlid": 65519, 00:10:13.298 "max_namespaces": 32, 00:10:13.298 "min_cntlid": 1, 00:10:13.298 "model_number": "SPDK bdev Controller", 00:10:13.298 "namespaces": [ 00:10:13.298 { 00:10:13.298 "bdev_name": "Null2", 00:10:13.298 "name": "Null2", 00:10:13.298 "nguid": "C77AA762C40D441E9FC412B1A7B22BCC", 00:10:13.298 "nsid": 1, 00:10:13.298 "uuid": "c77aa762-c40d-441e-9fc4-12b1a7b22bcc" 00:10:13.298 } 00:10:13.298 ], 00:10:13.298 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:13.298 "serial_number": "SPDK00000000000002", 00:10:13.298 "subtype": "NVMe" 00:10:13.298 }, 00:10:13.298 { 00:10:13.298 "allow_any_host": true, 00:10:13.298 "hosts": [], 00:10:13.298 "listen_addresses": [ 00:10:13.298 { 00:10:13.298 "adrfam": "IPv4", 00:10:13.299 "traddr": "10.0.0.3", 00:10:13.299 "trsvcid": "4420", 00:10:13.299 
"trtype": "TCP" 00:10:13.299 } 00:10:13.299 ], 00:10:13.299 "max_cntlid": 65519, 00:10:13.299 "max_namespaces": 32, 00:10:13.299 "min_cntlid": 1, 00:10:13.299 "model_number": "SPDK bdev Controller", 00:10:13.299 "namespaces": [ 00:10:13.299 { 00:10:13.299 "bdev_name": "Null3", 00:10:13.299 "name": "Null3", 00:10:13.299 "nguid": "3C875A0A10D44A66BBA6C0D1E4B265C1", 00:10:13.299 "nsid": 1, 00:10:13.299 "uuid": "3c875a0a-10d4-4a66-bba6-c0d1e4b265c1" 00:10:13.299 } 00:10:13.299 ], 00:10:13.299 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:13.299 "serial_number": "SPDK00000000000003", 00:10:13.299 "subtype": "NVMe" 00:10:13.299 }, 00:10:13.299 { 00:10:13.299 "allow_any_host": true, 00:10:13.299 "hosts": [], 00:10:13.299 "listen_addresses": [ 00:10:13.299 { 00:10:13.299 "adrfam": "IPv4", 00:10:13.299 "traddr": "10.0.0.3", 00:10:13.299 "trsvcid": "4420", 00:10:13.299 "trtype": "TCP" 00:10:13.299 } 00:10:13.299 ], 00:10:13.299 "max_cntlid": 65519, 00:10:13.299 "max_namespaces": 32, 00:10:13.299 "min_cntlid": 1, 00:10:13.299 "model_number": "SPDK bdev Controller", 00:10:13.299 "namespaces": [ 00:10:13.299 { 00:10:13.299 "bdev_name": "Null4", 00:10:13.299 "name": "Null4", 00:10:13.299 "nguid": "E977558F913749829BCC09E048E6527C", 00:10:13.299 "nsid": 1, 00:10:13.299 "uuid": "e977558f-9137-4982-9bcc-09e048e6527c" 00:10:13.299 } 00:10:13.299 ], 00:10:13.299 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:13.299 "serial_number": "SPDK00000000000004", 00:10:13.299 "subtype": "NVMe" 00:10:13.299 } 00:10:13.299 ] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:13.299 17:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.299 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.558 rmmod nvme_tcp 00:10:13.558 rmmod nvme_fabrics 00:10:13.558 rmmod nvme_keyring 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72573 ']' 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72573 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72573 ']' 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72573 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72573 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.558 killing process with pid 72573 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72573' 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72573 00:10:13.558 17:09:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72573 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:13.816 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:10:14.074 00:10:14.074 real 0m2.209s 00:10:14.074 user 0m4.382s 00:10:14.074 sys 0m0.770s 00:10:14.074 ************************************ 00:10:14.074 END TEST nvmf_target_discovery 00:10:14.074 
************************************ 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:14.074 ************************************ 00:10:14.074 START TEST nvmf_referrals 00:10:14.074 ************************************ 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:14.074 * Looking for test storage... 00:10:14.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.074 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.334 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.334 --rc genhtml_branch_coverage=1 00:10:14.334 --rc genhtml_function_coverage=1 00:10:14.334 --rc genhtml_legend=1 00:10:14.334 --rc geninfo_all_blocks=1 00:10:14.335 --rc geninfo_unexecuted_blocks=1 00:10:14.335 00:10:14.335 ' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.335 --rc genhtml_branch_coverage=1 00:10:14.335 --rc genhtml_function_coverage=1 00:10:14.335 --rc genhtml_legend=1 00:10:14.335 --rc geninfo_all_blocks=1 00:10:14.335 --rc geninfo_unexecuted_blocks=1 00:10:14.335 00:10:14.335 ' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.335 --rc genhtml_branch_coverage=1 00:10:14.335 --rc genhtml_function_coverage=1 00:10:14.335 --rc genhtml_legend=1 00:10:14.335 --rc geninfo_all_blocks=1 00:10:14.335 --rc geninfo_unexecuted_blocks=1 00:10:14.335 00:10:14.335 ' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.335 --rc genhtml_branch_coverage=1 00:10:14.335 --rc genhtml_function_coverage=1 00:10:14.335 --rc genhtml_legend=1 00:10:14.335 --rc geninfo_all_blocks=1 00:10:14.335 --rc geninfo_unexecuted_blocks=1 00:10:14.335 00:10:14.335 ' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
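The scripts/common.sh trace above (lt 1.15 2 driving cmp_versions/decimal) is the stock "is the installed lcov older than 2.x" check: both version strings are split on dots and compared field by field, and old lcov gets the extra branch/function coverage flags. In spirit (a simplified sketch, not the verbatim helper):

  # Simplified field-by-field version comparison in the spirit of cmp_versions.
  lt() {                                   # lt A B -> success when A < B
      local -a v1 v2; local i
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                             # equal, hence not less-than
  }
  lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1 ..."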
00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.335 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:14.335 17:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.335 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:14.336 Cannot find device "nvmf_init_br" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:14.336 Cannot find device "nvmf_init_br2" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:14.336 Cannot find device "nvmf_tgt_br" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.336 Cannot find device "nvmf_tgt_br2" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:14.336 Cannot find device "nvmf_init_br" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:14.336 Cannot find device "nvmf_init_br2" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:14.336 Cannot find device "nvmf_tgt_br" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:14.336 Cannot find device "nvmf_tgt_br2" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:14.336 Cannot find device "nvmf_br" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:14.336 Cannot find device "nvmf_init_if" 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:14.336 Cannot find device "nvmf_init_if2" 00:10:14.336 17:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.336 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:14.595 17:09:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:14.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:10:14.595 00:10:14.595 --- 10.0.0.3 ping statistics --- 00:10:14.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.595 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:14.595 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:14.595 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:10:14.595 00:10:14.595 --- 10.0.0.4 ping statistics --- 00:10:14.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.595 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:14.595 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:14.596 00:10:14.596 --- 10.0.0.1 ping statistics --- 00:10:14.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.596 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:14.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
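With the links up, a bridge ties the two sides together, and the `ipts` wrapper inserts iptables ACCEPT rules for the NVMe/TCP port, tagging each rule with an `SPDK_NVMF:` comment so teardown can later strip exactly these rules back out. The four pings (host to both namespace addresses, and back) then prove connectivity across the bridge before any NVMe traffic flows. A sketch of the bridge-and-firewall step:

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br    # enslave all bridge-side veth ends
    done
    # tagged rule; the comment is the teardown handle
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                 # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # namespace -> host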
00:10:14.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:10:14.596 00:10:14.596 --- 10.0.0.2 ping statistics --- 00:10:14.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.596 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.596 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72837 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72837 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72837 ']' 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.855 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 [2024-11-19 17:09:07.201281] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
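nvmfappstart prepends the namespace wrapper onto the app command line (the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step above), so the target process runs entirely inside nvmf_tgt_ns_spdk, and waitforlisten blocks until the RPC socket answers. A hedged sketch of the launch-and-wait step; the binary path and flags come from the trace, but the polling loop is illustrative, not the verbatim helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the target answers on the UNIX-domain RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.1
    done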
00:10:14.855 [2024-11-19 17:09:07.201387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.855 [2024-11-19 17:09:07.349585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.114 [2024-11-19 17:09:07.418362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.114 [2024-11-19 17:09:07.418736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.114 [2024-11-19 17:09:07.419258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.114 [2024-11-19 17:09:07.419465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.114 [2024-11-19 17:09:07.419655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.114 [2024-11-19 17:09:07.421298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.114 [2024-11-19 17:09:07.421464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.114 [2024-11-19 17:09:07.421668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.114 [2024-11-19 17:09:07.421682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.114 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 [2024-11-19 17:09:07.644794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 [2024-11-19 17:09:07.657707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
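Once the reactors are up, the test configures the fabric over RPC: create the TCP transport with the options assembled in NVMF_TRANSPORT_OPTS plus `-u 8192`, and attach a listener for the discovery subsystem on 10.0.0.3:8009, the conventional NVMe-oF discovery port. Equivalent rpc.py calls, reconstructed from the rpc_cmd trace (rpc_cmd is a thin wrapper around rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        -t tcp -a 10.0.0.3 -s 8009 discovery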
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:15.373 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
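The test then registers three referrals (127.0.0.2 through 127.0.0.4, port 4430) and verifies them through two independent paths: the target's own RPC state, and the discovery log page as an initiator sees it. Both views are normalized to a sorted address list so a plain string compare works; the helper traced at @19-@26 boils down to something like this (a reconstruction, assuming rpc_cmd and the NVME_HOST array from common.sh):

    get_referral_ips() {
        if [[ $1 == rpc ]]; then      # target-side view
            rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
        else                          # initiator-side view ($1 == nvme)
            nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
                jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
                sort
        fi
    }

The `select(.subtype != ...)` filter drops the record describing the discovery service itself, leaving only the referred-to targets.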
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.374 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:15.633 17:09:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.633 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.892 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.892 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
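Referrals can be qualified with a subsystem NQN: `-n discovery` announces another discovery service, while `-n nqn.2016-06.io.spdk:cnode1` points the host directly at an NVM subsystem. Both land at the same traddr here (hence the `127.0.0.2 127.0.0.2` compare), but they surface in the discovery log with different subtypes, which is what the following checks pick apart:

    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1
    # the initiator now sees one "discovery subsystem referral" record
    # and one "nvme subsystem" record for the same address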
00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:15.893 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
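get_discovery_entries is the subtype-aware counterpart of the address helper: it filters the JSON discovery log down to records of one kind so the test can assert on their subnqn. A reconstructed sketch; the traced helper appears to splice the subtype string into the jq program, and passing it with --arg is the equivalent, injection-safe form:

    get_discovery_entries() {   # $1 = "nvme subsystem" | "discovery subsystem referral"
        nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
            jq --arg st "$1" '.records[] | select(.subtype == $st)'
    }

    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1 while that referral is installed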
00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:16.152 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:16.411 17:09:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:16.670 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:10:16.930 rmmod nvme_tcp 00:10:16.930 rmmod nvme_fabrics 00:10:16.930 rmmod nvme_keyring 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72837 ']' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72837 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72837 ']' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72837 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72837 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.930 killing process with pid 72837 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72837' 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72837 00:10:16.930 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72837 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.189 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.453 17:09:09 
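nvmftestfini unwinds everything in reverse order: unload the kernel initiator modules, kill the target by the pid recorded at startup, then rebuild iptables without the tagged rules; the `SPDK_NVMF:` comments written by `ipts` during setup are exactly what the grep keys on. Condensed from the trace:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 72837 && wait 72837                               # $nvmfpid from nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the test's rules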
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:10:17.453 ************************************ 00:10:17.453 END TEST nvmf_referrals 00:10:17.453 ************************************ 00:10:17.453 00:10:17.453 real 0m3.402s 00:10:17.453 user 0m9.743s 00:10:17.453 sys 0m1.119s 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.453 ************************************ 00:10:17.453 START TEST nvmf_connect_disconnect 00:10:17.453 ************************************ 00:10:17.453 17:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:17.729 * Looking for test storage... 
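The START TEST / END TEST banners and the real/user/sys line come from the run_test harness, which names, times, and brackets each test body before moving on to the next suite, nvmf_connect_disconnect. Roughly (a hedged sketch inferred from the banners, not quoted from the harness):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # produces the real/user/sys summary above
        echo "************ END TEST $name ************"
    }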
00:10:17.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.729 --rc genhtml_branch_coverage=1 00:10:17.729 --rc genhtml_function_coverage=1 00:10:17.729 --rc genhtml_legend=1 00:10:17.729 --rc geninfo_all_blocks=1 00:10:17.729 --rc geninfo_unexecuted_blocks=1 00:10:17.729 00:10:17.729 ' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.729 --rc genhtml_branch_coverage=1 00:10:17.729 --rc genhtml_function_coverage=1 00:10:17.729 --rc genhtml_legend=1 00:10:17.729 --rc geninfo_all_blocks=1 00:10:17.729 --rc geninfo_unexecuted_blocks=1 00:10:17.729 00:10:17.729 ' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.729 --rc genhtml_branch_coverage=1 00:10:17.729 --rc genhtml_function_coverage=1 00:10:17.729 --rc genhtml_legend=1 00:10:17.729 --rc geninfo_all_blocks=1 00:10:17.729 --rc geninfo_unexecuted_blocks=1 00:10:17.729 00:10:17.729 ' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.729 --rc genhtml_branch_coverage=1 00:10:17.729 --rc genhtml_function_coverage=1 00:10:17.729 --rc genhtml_legend=1 00:10:17.729 --rc geninfo_all_blocks=1 00:10:17.729 --rc geninfo_unexecuted_blocks=1 00:10:17.729 00:10:17.729 ' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
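The lcov probe gates the coverage flags on the tool's version: `lt 1.15 2` tokenizes both version strings (the `IFS=.-:` reads above) and compares them field by field, and only then are LCOV_OPTS and LCOV exported with the 1.x-style branch- and function-coverage switches. The comparison logic, condensed to its core (a sketch of the traced cmp_versions, not the verbatim function):

    lt() {  # succeeds if version $1 < version $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"   # the real helper also splits on ':'
        IFS=.- read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov older than 2.x"   # 1 < 2 in the first field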
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:17.729 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.730 17:09:10 
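The triple-length PATH here is a side effect of sourcing paths/export.sh once per test: each pass prepends the same toolchain directories (golangci, protoc, go) again, so the prefix accumulates duplicates. This is harmless, since lookup stops at the first match, but a dedup pass would be a one-liner (illustrative only; the test scripts do not do this):

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # trim the trailing separator awk leaves behind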
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.730 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
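The `line 33: [: : integer expression expected` message is the one real error in this stretch: common.sh line 33 feeds an empty variable to a numeric `-eq` test, so `[ '' -eq 1 ]` fails and execution falls through to the `-n` and `-eq` fallbacks traced at lines 37 and 39. It is cosmetic here, but the standard hardening is a default expansion; with FLAG standing in for whatever variable line 33 actually reads (the trace does not show its name):

    [ "${FLAG:-0}" -eq 1 ]   # empty/unset collapses to 0 instead of erroring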
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.730 Cannot find device "nvmf_init_br" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.730 Cannot find device "nvmf_init_br2" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.730 Cannot find device "nvmf_tgt_br" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.730 Cannot find device "nvmf_tgt_br2" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.730 Cannot find device "nvmf_init_br" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.730 Cannot find device "nvmf_init_br2" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.730 Cannot find device "nvmf_tgt_br" 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:10:17.730 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:18.010 Cannot find device "nvmf_tgt_br2" 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:18.010 Cannot find device "nvmf_br" 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:18.010 Cannot find device "nvmf_init_if" 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:18.010 Cannot find device "nvmf_init_if2" 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.010 17:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.010 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:18.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:18.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:10:18.011 00:10:18.011 --- 10.0.0.3 ping statistics --- 00:10:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.011 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:18.011 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:18.011 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:10:18.011 00:10:18.011 --- 10.0.0.4 ping statistics --- 00:10:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.011 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:18.011 00:10:18.011 --- 10.0.0.1 ping statistics --- 00:10:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.011 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:18.011 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:18.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:18.271 00:10:18.271 --- 10.0.0.2 ping statistics --- 00:10:18.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.271 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73191 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73191 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73191 ']' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.271 17:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.271 [2024-11-19 17:09:10.642239] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:10:18.271 [2024-11-19 17:09:10.642336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.271 [2024-11-19 17:09:10.789839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.529 [2024-11-19 17:09:10.843718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.529 [2024-11-19 17:09:10.843850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.529 [2024-11-19 17:09:10.843880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.529 [2024-11-19 17:09:10.843888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.529 [2024-11-19 17:09:10.843896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
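Collapsed into one place, the @177-@214 setup traced above builds this topology: four veth pairs, with the nvmf_tgt_if/nvmf_tgt_if2 ends moved into the nvmf_tgt_ns_spdk namespace, the *_br peers enslaved to the nvmf_br bridge, and the initiator addresses 10.0.0.1-2 bridged to the target addresses 10.0.0.3-4. The equivalent standalone commands, condensed from the trace:

# Namespace that will host the SPDK target process.
ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: the *_if ends carry IPs, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace, then address everything.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

The target binary is then launched inside the namespace (the @508 record: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten blocks until the new pid answers on /var/tmp/spdk.sock.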
00:10:18.529 [2024-11-19 17:09:10.845477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.529 [2024-11-19 17:09:10.845582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.529 [2024-11-19 17:09:10.845754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.529 [2024-11-19 17:09:10.845774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.529 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.786 [2024-11-19 17:09:11.061565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.786 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.787 17:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.787 [2024-11-19 17:09:11.139773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:18.787 17:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:21.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.197 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.197 rmmod nvme_tcp 00:10:30.197 rmmod nvme_fabrics 00:10:30.197 rmmod nvme_keyring 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73191 ']' 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73191 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73191 ']' 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73191 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
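The rpc_cmd records at @18-@24 above provision the running target over its JSON-RPC socket, and the five "disconnected 1 controller(s)" lines are the num_iterations=5 loop attaching and detaching the host. A sketch of the same flow as standalone commands; the rpc.py calls mirror the traced rpc_cmd arguments, while the nvme-cli loop is an assumption about what the connect/disconnect helper does on the initiator side:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Provision the target (mirrors the traced rpc_cmd calls).
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512              # creates bdev "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Hypothetical reconstruction of the 5-iteration loop; "nvme disconnect"
# prints the "disconnected 1 controller(s)" lines seen above.
for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done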
00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73191 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.456 killing process with pid 73191 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73191' 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73191 00:10:30.456 17:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73191 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:30.715 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:30.716 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:30.974 17:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:10:30.974 00:10:30.974 real 0m13.415s 00:10:30.974 user 0m48.045s 00:10:30.974 sys 0m1.784s 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.974 ************************************ 00:10:30.974 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.974 END TEST nvmf_connect_disconnect 00:10:30.975 ************************************ 00:10:30.975 17:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:30.975 17:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.975 17:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.975 17:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.975 ************************************ 00:10:30.975 START TEST nvmf_multitarget 00:10:30.975 ************************************ 00:10:30.975 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:30.975 * Looking for test storage... 
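The real/user/sys block and the starred START/END banners above come from the run_test wrapper in autotest_common.sh, which times each test script; the @1105-@1130 records hint that the real helper also validates arguments and toggles xtrace. A heavily reduced, hypothetical sketch of that wrapper:

# Hypothetical reduction of run_test: banner, time the script, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp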
00:10:31.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.235 --rc genhtml_branch_coverage=1 00:10:31.235 --rc genhtml_function_coverage=1 00:10:31.235 --rc genhtml_legend=1 00:10:31.235 --rc geninfo_all_blocks=1 00:10:31.235 --rc geninfo_unexecuted_blocks=1 00:10:31.235 00:10:31.235 ' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.235 --rc genhtml_branch_coverage=1 00:10:31.235 --rc genhtml_function_coverage=1 00:10:31.235 --rc genhtml_legend=1 00:10:31.235 --rc geninfo_all_blocks=1 00:10:31.235 --rc geninfo_unexecuted_blocks=1 00:10:31.235 00:10:31.235 ' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.235 --rc genhtml_branch_coverage=1 00:10:31.235 --rc genhtml_function_coverage=1 00:10:31.235 --rc genhtml_legend=1 00:10:31.235 --rc geninfo_all_blocks=1 00:10:31.235 --rc geninfo_unexecuted_blocks=1 00:10:31.235 00:10:31.235 ' 00:10:31.235 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.235 --rc genhtml_branch_coverage=1 00:10:31.235 --rc genhtml_function_coverage=1 00:10:31.235 --rc genhtml_legend=1 00:10:31.235 --rc geninfo_all_blocks=1 00:10:31.235 --rc geninfo_unexecuted_blocks=1 00:10:31.235 00:10:31.235 ' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.236 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:31.236 Cannot find device "nvmf_init_br" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:31.236 Cannot find device "nvmf_init_br2" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:31.236 Cannot find device "nvmf_tgt_br" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:31.236 Cannot find device "nvmf_tgt_br2" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:31.236 Cannot find device "nvmf_init_br" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:31.236 Cannot find device "nvmf_init_br2" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:31.236 Cannot find device "nvmf_tgt_br" 00:10:31.236 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:31.237 Cannot find device "nvmf_tgt_br2" 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:31.237 Cannot find device "nvmf_br" 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:31.237 Cannot find device "nvmf_init_if" 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:31.237 Cannot find device "nvmf_init_if2" 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:10:31.237 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:31.496 17:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.496 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:31.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:10:31.755 00:10:31.755 --- 10.0.0.3 ping statistics --- 00:10:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.755 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:31.755 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:31.755 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:10:31.755 00:10:31.755 --- 10.0.0.4 ping statistics --- 00:10:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.755 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:31.755 00:10:31.755 --- 10.0.0.1 ping statistics --- 00:10:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.755 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:31.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:31.755 00:10:31.755 --- 10.0.0.2 ping statistics --- 00:10:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.755 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73632 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73632 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73632 ']' 00:10:31.755 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.756 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.756 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.756 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.756 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.756 [2024-11-19 17:09:24.163178] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
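The multitarget test rebuilds the same namespace topology, and its @217-@219 records (like the first test's) show how firewall changes are kept reversible: the ipts helper tags every inserted rule with an iptables comment, so the iptr teardown seen elsewhere in the trace (@297/@791) can strip exactly those rules from a full ruleset dump. The pattern, using the tag from the trace:

# Insert a rule tagged with a recognizable comment (what ipts does).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Teardown (what iptr does): dump all rules, drop the tagged ones,
# reload the remainder.
iptables-save | grep -v SPDK_NVMF | iptables-restore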
00:10:31.756 [2024-11-19 17:09:24.163272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.014 [2024-11-19 17:09:24.314819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.014 [2024-11-19 17:09:24.371716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.014 [2024-11-19 17:09:24.371816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.014 [2024-11-19 17:09:24.371831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.014 [2024-11-19 17:09:24.371841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.014 [2024-11-19 17:09:24.371849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.014 [2024-11-19 17:09:24.373385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.014 [2024-11-19 17:09:24.373551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.014 [2024-11-19 17:09:24.373620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.014 [2024-11-19 17:09:24.373628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.014 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.014 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:32.014 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.014 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.014 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:32.273 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:32.532 "nvmf_tgt_1" 00:10:32.532 17:09:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:32.532 "nvmf_tgt_2" 00:10:32.532 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:32.532 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:32.791 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:32.791 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:32.791 true 00:10:33.049 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:33.049 true 00:10:33.049 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:33.049 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.308 rmmod nvme_tcp 00:10:33.308 rmmod nvme_fabrics 00:10:33.308 rmmod nvme_keyring 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73632 ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73632 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73632 ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73632 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73632 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.308 killing process with pid 73632 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
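The multitarget exercise above reduces to a create/count/delete/count cycle: with the default target plus nvmf_tgt_1 and nvmf_tgt_2, nvmf_get_targets must report three entries, then one again after both deletions. Condensed from the trace (same script path as this run):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32   # -s 32 as in the trace
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [[ $($rpc nvmf_get_targets | jq length) == 3 ]]   # default target + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [[ $($rpc nvmf_get_targets | jq length) == 1 ]]   # back to the default only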
common/autotest_common.sh@972 -- # echo 'killing process with pid 73632' 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73632 00:10:33.308 17:09:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73632 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:33.567 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.826 17:09:26 
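The teardown above is deliberately surgical. Every iptables rule the test added carries an 'SPDK_NVMF' comment (appended by the ipts wrapper seen later in this log), so iptr can drop exactly those rules by filtering the saved ruleset; and the nvme-* modules are unloaded with a bounded retry, since they can stay busy briefly after a disconnect. Both patterns, as a sketch (retry bound as in the trace, sleep interval illustrative):

    # Drop only the rules tagged at insert time; everything else round-trips untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Modules may still be referenced right after disconnect; retry under set +e.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    set -e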
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:10:33.826 00:10:33.826 real 0m2.843s 00:10:33.826 user 0m7.717s 00:10:33.826 sys 0m0.885s 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.826 ************************************ 00:10:33.826 END TEST nvmf_multitarget 00:10:33.826 ************************************ 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.826 ************************************ 00:10:33.826 START TEST nvmf_rpc 00:10:33.826 ************************************ 00:10:33.826 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:34.086 * Looking for test storage... 00:10:34.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.086 --rc genhtml_branch_coverage=1 00:10:34.086 --rc genhtml_function_coverage=1 00:10:34.086 --rc genhtml_legend=1 00:10:34.086 --rc geninfo_all_blocks=1 00:10:34.086 --rc geninfo_unexecuted_blocks=1 00:10:34.086 00:10:34.086 ' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.086 --rc genhtml_branch_coverage=1 00:10:34.086 --rc genhtml_function_coverage=1 00:10:34.086 --rc genhtml_legend=1 00:10:34.086 --rc geninfo_all_blocks=1 00:10:34.086 --rc geninfo_unexecuted_blocks=1 00:10:34.086 00:10:34.086 ' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.086 --rc genhtml_branch_coverage=1 00:10:34.086 --rc genhtml_function_coverage=1 00:10:34.086 --rc genhtml_legend=1 00:10:34.086 --rc geninfo_all_blocks=1 00:10:34.086 --rc geninfo_unexecuted_blocks=1 00:10:34.086 00:10:34.086 ' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.086 --rc genhtml_branch_coverage=1 00:10:34.086 --rc genhtml_function_coverage=1 00:10:34.086 --rc genhtml_legend=1 00:10:34.086 --rc geninfo_all_blocks=1 00:10:34.086 --rc geninfo_unexecuted_blocks=1 00:10:34.086 00:10:34.086 ' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.086 17:09:26 
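The burst of scripts/common.sh calls above is a pure-bash version comparison: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter with zeros, to pick the right lcov option set. A standalone sketch of that comparator (simplified; the real cmp_versions supports other operators too):

    lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly smaller at this field
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x: use the plain --rc options"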
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.086 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.087 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
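The host identity used throughout this test comes straight from nvme-cli, as the trace above shows: nvme gen-hostnqn emits a UUID-based NQN, and the UUID tail of that string doubles as the host ID, so the two always agree. The relationship, sketched (the substring extraction is one illustrative way to derive the same value the script records):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:838cb2d2-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")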
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:34.087 Cannot find device "nvmf_init_br" 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:10:34.087 17:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:34.087 Cannot find device "nvmf_init_br2" 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:34.087 Cannot find device "nvmf_tgt_br" 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.087 Cannot find device "nvmf_tgt_br2" 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:34.087 Cannot find device "nvmf_init_br" 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:10:34.087 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:34.346 Cannot find device "nvmf_init_br2" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:34.346 Cannot find device "nvmf_tgt_br" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:34.346 Cannot find device "nvmf_tgt_br2" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:34.346 Cannot find device "nvmf_br" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:34.346 Cannot find device "nvmf_init_if" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:34.346 Cannot find device "nvmf_init_if2" 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:34.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:34.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:34.346 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
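The ip commands above assemble the virtual test network: each interface is one end of a veth pair, the *_if target ends are moved into the nvmf_tgt_ns_spdk namespace, addresses 10.0.0.1-10.0.0.4/24 land on the endpoints, and the root-side *_br ends are enslaved to the nvmf_br bridge so initiator and target can reach each other. One pair of each kind, condensed (plus an 'ip link set ... up' per interface, as in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end...
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ...moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br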
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:34.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:34.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:10:34.605 00:10:34.605 --- 10.0.0.3 ping statistics --- 00:10:34.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.605 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:34.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:34.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:10:34.605 00:10:34.605 --- 10.0.0.4 ping statistics --- 00:10:34.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.605 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:34.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:10:34.605 00:10:34.605 --- 10.0.0.1 ping statistics --- 00:10:34.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.605 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:34.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:34.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:34.605 00:10:34.605 --- 10.0.0.2 ping statistics --- 00:10:34.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.605 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:34.605 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73909 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73909 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73909 ']' 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.606 17:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.606 [2024-11-19 17:09:27.052100] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
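Worth noting on the nvmfappstart above: the -m 0xF core mask selects CPUs 0 through 3, which is why four reactors announce themselves just below and why nvmf_get_stats later reports four poll groups. A quick illustration of reading such a mask:

    mask=0xF
    for ((cpu = 0; cpu < 8; cpu++)); do
        (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
    done
    # prints cores 0, 1, 2 and 3 for 0xF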
00:10:34.606 [2024-11-19 17:09:27.052222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.864 [2024-11-19 17:09:27.200207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.864 [2024-11-19 17:09:27.271583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.864 [2024-11-19 17:09:27.271644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.864 [2024-11-19 17:09:27.271656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.864 [2024-11-19 17:09:27.271664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.864 [2024-11-19 17:09:27.271672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.864 [2024-11-19 17:09:27.273350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.864 [2024-11-19 17:09:27.273492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.864 [2024-11-19 17:09:27.273622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.864 [2024-11-19 17:09:27.273643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:35.800 "poll_groups": [ 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_000", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_001", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 
00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_002", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_003", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [] 00:10:35.800 } 00:10:35.800 ], 00:10:35.800 "tick_rate": 2200000000 00:10:35.800 }' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.800 [2024-11-19 17:09:28.205600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:35.800 "poll_groups": [ 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_000", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [ 00:10:35.800 { 00:10:35.800 "trtype": "TCP" 00:10:35.800 } 00:10:35.800 ] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_001", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [ 00:10:35.800 { 00:10:35.800 "trtype": "TCP" 00:10:35.800 } 00:10:35.800 ] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_002", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [ 00:10:35.800 { 00:10:35.800 "trtype": "TCP" 00:10:35.800 } 
00:10:35.800 ] 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "admin_qpairs": 0, 00:10:35.800 "completed_nvme_io": 0, 00:10:35.800 "current_admin_qpairs": 0, 00:10:35.800 "current_io_qpairs": 0, 00:10:35.800 "io_qpairs": 0, 00:10:35.800 "name": "nvmf_tgt_poll_group_003", 00:10:35.800 "pending_bdev_io": 0, 00:10:35.800 "transports": [ 00:10:35.800 { 00:10:35.800 "trtype": "TCP" 00:10:35.800 } 00:10:35.800 ] 00:10:35.800 } 00:10:35.800 ], 00:10:35.800 "tick_rate": 2200000000 00:10:35.800 }' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:35.800 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 Malloc1 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:36.060 17:09:28 
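The jq pipelines above are two small helpers from rpc.sh: jcount counts how many values a filter yields from the captured nvmf_get_stats JSON and jsum adds them up, so the checks assert one poll group per enabled core and zero queue pairs before any host connects. Reconstructed from the trace:

    stats=$(rpc_cmd nvmf_get_stats)
    jcount() { jq "$1" <<< "$stats" | wc -l; }                        # number of values matched
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # numeric sum of the values
    (( $(jcount '.poll_groups[].name') == 4 ))       # one poll group per core of the 0xF mask
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))    # no connections yet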
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 [2024-11-19 17:09:28.430474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.3 -s 4420 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.3 -s 4420 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.3 -s 4420 00:10:36.060 [2024-11-19 17:09:28.459060] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07' 00:10:36.060 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:36.060 could not add new controller: failed to write to nvme-fabrics device 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
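The access-control scenario above builds a full target stack and then expects the first connect to be refused: the subsystem has allow-any-host disabled, so until the host NQN is registered, nvme connect must fail with the 'does not allow host' error seen in the trace. The NOT helper inverts the exit status so the test passes on exactly that failure. The flow, condensed (NOT is a minimal sketch; the real helper in autotest_common.sh also inspects the failure mode):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: require registered hosts
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    NOT() { if "$@"; then return 1; fi; return 0; }   # succeed only when the command fails
    NOT nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420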
00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.061 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:36.320 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.320 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.320 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.320 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:36.320 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
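Registering the host flips the outcome: after nvmf_subsystem_add_host the same connect succeeds, and waitforserial polls lsblk until a block device carrying the subsystem serial shows up before the test proceeds. Sketch of that loop (retry bound and sleep as in the trace):

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    for ((i = 0; i <= 15; i++)); do
        # Wait until exactly one namespace with our serial is visible.
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The mirror-image check that follows removes the host again, confirms the connect is once more refused, then re-enables open access with nvmf_subsystem_allow_any_host -e and expects a plain connect, with no add_host, to succeed.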
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:38.290 [2024-11-19 17:09:30.791136] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07' 00:10:38.290 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:38.290 could not add new controller: failed to write to nvme-fabrics device 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.290 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:38.548 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.548 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.548 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.548 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.548 17:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.488 17:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.488 17:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.488 17:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.488 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.488 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.488 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:40.488 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.747 [2024-11-19 17:09:33.102838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.747 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:41.005 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.005 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:41.005 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.005 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:41.005 17:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:42.908 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.909 [2024-11-19 17:09:35.431736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.909 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.168 17:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:43.168 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 17:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 [2024-11-19 17:09:37.749364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:45.700 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:47.604 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.604 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:47.923 17:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 [2024-11-19 17:09:40.171723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.923 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:49.855 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.114 17:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 [2024-11-19 17:09:42.472969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:50.114 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:50.373 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.373 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.373 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.373 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:50.373 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
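
For reference, each of the five loop iterations above runs the same lifecycle against the target. A condensed standalone sketch of one pass (values copied from this run; it assumes a running SPDK nvmf target with a Malloc1 bdev already created and scripts/rpc.py wrapped as rpc_cmd, and the unbounded wait loop is a stand-in for the script's waitforserial helper, which gives up after 15 tries):

    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
    rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME           # subsystem with serial number
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420  # NVMe/TCP listener
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                      # expose bdev Malloc1 as nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host "$NQN"                           # no host ACL on this pass
    nvme connect --hostnqn="$HOSTNQN" --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 \
        -t tcp -n "$NQN" -a 10.0.0.3 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 2                                    # block device shows up once the controller attaches
    done
    nvme disconnect -n "$NQN"
    rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
    rpc_cmd nvmf_delete_subsystem "$NQN"

The host-ACL sequence at target/rpc.sh@61 through @73 earlier in this run is the same cycle with the ACL left on: nvmf_subsystem_add_host admits the host NQN, nvmf_subsystem_remove_host makes the next connect fail with "Subsystem ... does not allow host", and nvmf_subsystem_allow_any_host -e reopens the subsystem to any host.
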
00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.275 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 [2024-11-19 17:09:44.818164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 [2024-11-19 17:09:44.866155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.535 17:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.535 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 [2024-11-19 17:09:44.914225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 [2024-11-19 17:09:44.970297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 
17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 [2024-11-19 17:09:45.018441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.536 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:52.795 "poll_groups": [ 00:10:52.795 { 00:10:52.795 "admin_qpairs": 2, 00:10:52.795 "completed_nvme_io": 67, 00:10:52.795 "current_admin_qpairs": 0, 00:10:52.795 "current_io_qpairs": 0, 00:10:52.795 "io_qpairs": 16, 00:10:52.795 "name": "nvmf_tgt_poll_group_000", 00:10:52.795 "pending_bdev_io": 0, 00:10:52.795 "transports": [ 00:10:52.795 { 00:10:52.795 "trtype": "TCP" 00:10:52.795 } 00:10:52.795 ] 00:10:52.795 }, 00:10:52.795 { 00:10:52.795 "admin_qpairs": 3, 00:10:52.795 "completed_nvme_io": 69, 00:10:52.795 "current_admin_qpairs": 0, 00:10:52.795 "current_io_qpairs": 0, 00:10:52.795 "io_qpairs": 17, 00:10:52.795 "name": "nvmf_tgt_poll_group_001", 00:10:52.795 "pending_bdev_io": 0, 00:10:52.795 "transports": [ 00:10:52.795 { 00:10:52.795 "trtype": "TCP" 00:10:52.795 } 00:10:52.795 ] 00:10:52.795 }, 00:10:52.795 { 00:10:52.795 "admin_qpairs": 1, 00:10:52.795 "completed_nvme_io": 119, 00:10:52.795 "current_admin_qpairs": 0, 00:10:52.795 "current_io_qpairs": 0, 00:10:52.795 "io_qpairs": 19, 00:10:52.795 "name": "nvmf_tgt_poll_group_002", 00:10:52.795 "pending_bdev_io": 0, 00:10:52.795 "transports": [ 00:10:52.795 { 00:10:52.795 "trtype": "TCP" 00:10:52.795 } 00:10:52.795 ] 00:10:52.795 }, 00:10:52.795 { 00:10:52.795 "admin_qpairs": 1, 00:10:52.795 "completed_nvme_io": 165, 00:10:52.795 "current_admin_qpairs": 0, 00:10:52.795 "current_io_qpairs": 0, 00:10:52.795 "io_qpairs": 18, 00:10:52.795 "name": "nvmf_tgt_poll_group_003", 00:10:52.795 "pending_bdev_io": 0, 00:10:52.795 "transports": [ 00:10:52.795 { 00:10:52.795 "trtype": "TCP" 00:10:52.795 } 00:10:52.795 ] 00:10:52.795 } 00:10:52.795 ], 
00:10:52.795 "tick_rate": 2200000000 00:10:52.795 }' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.795 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.796 rmmod nvme_tcp 00:10:52.796 rmmod nvme_fabrics 00:10:52.796 rmmod nvme_keyring 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73909 ']' 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73909 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73909 ']' 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73909 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73909 00:10:52.796 killing process with pid 73909 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.796 17:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73909' 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73909 00:10:52.796 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73909 00:10:53.364 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.364 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:10:53.365 00:10:53.365 real 0m19.577s 00:10:53.365 user 1m12.470s 00:10:53.365 sys 0m2.576s 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.365 ************************************ 00:10:53.365 END TEST nvmf_rpc 00:10:53.365 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 ************************************ 00:10:53.624 17:09:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:53.624 17:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.624 17:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.624 17:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.624 ************************************ 00:10:53.624 START TEST nvmf_invalid 00:10:53.624 ************************************ 00:10:53.624 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:53.624 * Looking for test storage... 00:10:53.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.624 --rc genhtml_branch_coverage=1 00:10:53.624 --rc genhtml_function_coverage=1 00:10:53.624 --rc genhtml_legend=1 00:10:53.624 --rc geninfo_all_blocks=1 00:10:53.624 --rc geninfo_unexecuted_blocks=1 00:10:53.624 00:10:53.624 ' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.624 --rc genhtml_branch_coverage=1 00:10:53.624 --rc genhtml_function_coverage=1 00:10:53.624 --rc genhtml_legend=1 00:10:53.624 --rc geninfo_all_blocks=1 00:10:53.624 --rc geninfo_unexecuted_blocks=1 00:10:53.624 00:10:53.624 ' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.624 --rc genhtml_branch_coverage=1 00:10:53.624 --rc genhtml_function_coverage=1 00:10:53.624 --rc genhtml_legend=1 00:10:53.624 --rc geninfo_all_blocks=1 00:10:53.624 --rc geninfo_unexecuted_blocks=1 00:10:53.624 00:10:53.624 ' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.624 --rc genhtml_branch_coverage=1 00:10:53.624 --rc genhtml_function_coverage=1 00:10:53.624 --rc genhtml_legend=1 00:10:53.624 --rc geninfo_all_blocks=1 00:10:53.624 --rc geninfo_unexecuted_blocks=1 00:10:53.624 00:10:53.624 ' 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:53.624 17:09:46 
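The trace above is cmp_versions from scripts/common.sh deciding that lcov 1.15 is older than 2: both version strings are split on '.', '-' and ':' into arrays with IFS + read -ra, each component is validated as a decimal, and components are compared numerically left to right. A minimal standalone sketch of that comparison (the function name ver_lt and the trailing example call are illustrative, not part of the test run):

# Sketch of the dotted-version comparison traced above; missing components
# count as 0, and "less-than" means strictly smaller at the first differing
# component.
ver_lt() {
    local -a a b
    local i x y
    IFS=.- read -ra a <<< "$1"   # split "1.15" into (1 15)
    IFS=.- read -ra b <<< "$2"
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0    # strictly smaller somewhere -> older
        ((x > y)) && return 1
    done
    return 1                     # equal -> not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # mirrors the 'lt 1.15 2' call above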
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.624 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.625 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.883 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
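nvmftestinit above records the namespace name and defines NVMF_TARGET_NS_CMD as an array prefix, so that later commands (the namespaced pings and the nvmf_tgt launch further down, where NVMF_APP is prepended with it) execute inside the target namespace. A small sketch of that array-prefix pattern, assuming only iproute2; the payload command here is illustrative:

# The namespace name matches the trace; expanding the array in front of any
# command transplants it into the namespace, exactly as the ping and
# nvmf_tgt invocations below do.
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
"${NVMF_TARGET_NS_CMD[@]}" ip -brief addr show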
00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:53.884 Cannot find device "nvmf_init_br" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:53.884 Cannot find device "nvmf_init_br2" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:53.884 Cannot find device "nvmf_tgt_br" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.884 Cannot find device "nvmf_tgt_br2" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:53.884 Cannot find device "nvmf_init_br" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:53.884 Cannot find device "nvmf_init_br2" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:53.884 Cannot find device "nvmf_tgt_br" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:53.884 Cannot find device "nvmf_tgt_br2" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:53.884 Cannot find device "nvmf_br" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:53.884 Cannot find device "nvmf_init_if" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:53.884 Cannot find device "nvmf_init_if2" 00:10:53.884 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.885 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.885 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.144 17:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:54.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:54.144 00:10:54.144 --- 10.0.0.3 ping statistics --- 00:10:54.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.144 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:54.144 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:54.144 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:10:54.144 00:10:54.144 --- 10.0.0.4 ping statistics --- 00:10:54.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.144 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:54.144 00:10:54.144 --- 10.0.0.1 ping statistics --- 00:10:54.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.144 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:54.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:54.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:54.144 00:10:54.144 --- 10.0.0.2 ping statistics --- 00:10:54.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.144 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74477 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74477 00:10:54.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74477 ']' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.144 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:54.403 [2024-11-19 17:09:46.699621] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
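The nvmf_veth_init sequence above builds the test topology: veth pairs whose host-side peers are enslaved to the nvmf_br bridge, target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.x/24 addresses, iptables ACCEPT rules for port 4420, and a ping sweep to verify reachability in both directions. A condensed sketch with a single initiator/target pair (names and addresses are taken from the trace; the real script creates two pairs on each side plus a bridge FORWARD rule):

# Condensed sketch of the topology traced above (one pair shown).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # host side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move into ns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                        # bridge the peers
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                             # host -> target ns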
00:10:54.403 [2024-11-19 17:09:46.699723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.403 [2024-11-19 17:09:46.854163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.403 [2024-11-19 17:09:46.928768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.403 [2024-11-19 17:09:46.928863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.403 [2024-11-19 17:09:46.928880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.403 [2024-11-19 17:09:46.928893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.403 [2024-11-19 17:09:46.928904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.403 [2024-11-19 17:09:46.930531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.403 [2024-11-19 17:09:46.930708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.403 [2024-11-19 17:09:46.930871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.403 [2024-11-19 17:09:46.930878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:54.662 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18945 00:10:54.921 [2024-11-19 17:09:47.444282] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:55.179 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/19 17:09:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18945 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:55.179 request: 00:10:55.179 { 00:10:55.179 "method": "nvmf_create_subsystem", 00:10:55.179 "params": { 00:10:55.179 "nqn": "nqn.2016-06.io.spdk:cnode18945", 00:10:55.180 "tgt_name": "foobar" 00:10:55.180 } 00:10:55.180 } 00:10:55.180 Got JSON-RPC error response 00:10:55.180 GoRPCClient: error on JSON-RPC call' 00:10:55.180 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/19 17:09:47 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode18945 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:55.180 request: 00:10:55.180 { 00:10:55.180 "method": "nvmf_create_subsystem", 00:10:55.180 "params": { 00:10:55.180 "nqn": "nqn.2016-06.io.spdk:cnode18945", 00:10:55.180 "tgt_name": "foobar" 00:10:55.180 } 00:10:55.180 } 00:10:55.180 Got JSON-RPC error response 00:10:55.180 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:55.180 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:55.180 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13371 00:10:55.438 [2024-11-19 17:09:47.792857] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13371: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:55.438 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/19 17:09:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13371 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:55.438 request: 00:10:55.438 { 00:10:55.438 "method": "nvmf_create_subsystem", 00:10:55.438 "params": { 00:10:55.438 "nqn": "nqn.2016-06.io.spdk:cnode13371", 00:10:55.438 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:55.438 } 00:10:55.438 } 00:10:55.438 Got JSON-RPC error response 00:10:55.438 GoRPCClient: error on JSON-RPC call' 00:10:55.438 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/19 17:09:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13371 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:55.438 request: 00:10:55.438 { 00:10:55.438 "method": "nvmf_create_subsystem", 00:10:55.438 "params": { 00:10:55.438 "nqn": "nqn.2016-06.io.spdk:cnode13371", 00:10:55.438 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:55.438 } 00:10:55.438 } 00:10:55.438 Got JSON-RPC error response 00:10:55.438 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:55.438 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:55.438 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2648 00:10:55.697 [2024-11-19 17:09:48.169407] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2648: invalid model number 'SPDK_Controller' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/19 17:09:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2648], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:55.697 request: 00:10:55.697 { 00:10:55.697 "method": "nvmf_create_subsystem", 00:10:55.697 "params": { 00:10:55.697 "nqn": "nqn.2016-06.io.spdk:cnode2648", 00:10:55.697 "model_number": "SPDK_Controller\u001f" 00:10:55.697 
} 00:10:55.697 } 00:10:55.697 Got JSON-RPC error response 00:10:55.697 GoRPCClient: error on JSON-RPC call' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/19 17:09:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2648], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:55.697 request: 00:10:55.697 { 00:10:55.697 "method": "nvmf_create_subsystem", 00:10:55.697 "params": { 00:10:55.697 "nqn": "nqn.2016-06.io.spdk:cnode2648", 00:10:55.697 "model_number": "SPDK_Controller\u001f" 00:10:55.697 } 00:10:55.697 } 00:10:55.697 Got JSON-RPC error response 00:10:55.697 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.697 17:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:55.697 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.698 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:55.957 17:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:55.957 
17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z /X00hcH*9a\LZmmM=Cp' 00:10:55.957 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'z /X00hcH*9a\LZmmM=Cp' nqn.2016-06.io.spdk:cnode20104 00:10:56.217 [2024-11-19 17:09:48.614082] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20104: invalid serial number 'z /X00hcH*9a\LZmmM=Cp' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/19 17:09:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20104 serial_number:z /X00hcH*9a\LZmmM=Cp], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN z /X00hcH*9a\LZmmM=Cp 00:10:56.217 request: 00:10:56.217 { 00:10:56.217 "method": "nvmf_create_subsystem", 00:10:56.217 "params": { 00:10:56.217 "nqn": "nqn.2016-06.io.spdk:cnode20104", 00:10:56.217 
"serial_number": "z /X00hcH*9a\\LZmmM=Cp" 00:10:56.217 } 00:10:56.217 } 00:10:56.217 Got JSON-RPC error response 00:10:56.217 GoRPCClient: error on JSON-RPC call' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/19 17:09:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20104 serial_number:z /X00hcH*9a\LZmmM=Cp], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN z /X00hcH*9a\LZmmM=Cp 00:10:56.217 request: 00:10:56.217 { 00:10:56.217 "method": "nvmf_create_subsystem", 00:10:56.217 "params": { 00:10:56.217 "nqn": "nqn.2016-06.io.spdk:cnode20104", 00:10:56.217 "serial_number": "z /X00hcH*9a\\LZmmM=Cp" 00:10:56.217 } 00:10:56.217 } 00:10:56.217 Got JSON-RPC error response 00:10:56.217 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:56.217 17:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:56.217 
17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:56.217 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:56.218 
17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.218 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 
17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.477 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 
17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 
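
The iterations traced above, and the few that follow, are invalid.sh building a random model number one byte at a time: pick a code point, render it with printf %x, decode the escape with echo -e, and append it to string. A minimal standalone sketch of the same idiom, under two stated assumptions: the function name gen_random_string is made up (it is not the invalid.sh helper), and the range here is capped at printable non-space ASCII, whereas the traced run also appends 0x7f, so the real helper clearly draws from a wider byte range.

  # Sketch only: gen_random_string is a hypothetical name, not the invalid.sh helper.
  gen_random_string() {
    local length=$1 string='' ll hex
    for (( ll = 0; ll < length; ll++ )); do
      hex=$(printf '%x' $(( RANDOM % 94 + 33 )))  # printable, non-space ASCII
      string+=$(echo -e "\x$hex")                 # decode the escape, append one char
    done
    echo "$string"
  }

Presumably the test requests 41 characters, one more than the NVMe model number's 40-byte field, which is why the create-subsystem call that follows is rejected with "Invalid MN".
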
00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'K}BK")\'\''+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN&' 00:10:56.478 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'K}BK")\'\''+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN&' nqn.2016-06.io.spdk:cnode23198 00:10:56.736 [2024-11-19 17:09:49.202914] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23198: invalid model number 'K}BK")\'+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN&' 00:10:56.736 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/19 17:09:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:K}BK")\'\''+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN& nqn:nqn.2016-06.io.spdk:cnode23198], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN K}BK")\'\''+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN& 00:10:56.736 request: 00:10:56.736 { 00:10:56.736 "method": "nvmf_create_subsystem", 00:10:56.736 "params": { 00:10:56.736 "nqn": 
"nqn.2016-06.io.spdk:cnode23198", 00:10:56.736 "model_number": "K}BK\")\\'\''+kSF,Qi-d}a>@%HUICOw$Y9Q]AC\u007fgFSN&" 00:10:56.736 } 00:10:56.736 } 00:10:56.736 Got JSON-RPC error response 00:10:56.736 GoRPCClient: error on JSON-RPC call' 00:10:56.736 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/19 17:09:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:K}BK")\'+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN& nqn:nqn.2016-06.io.spdk:cnode23198], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN K}BK")\'+kSF,Qi-d}a>@%HUICOw$Y9Q]ACgFSN& 00:10:56.736 request: 00:10:56.736 { 00:10:56.736 "method": "nvmf_create_subsystem", 00:10:56.736 "params": { 00:10:56.736 "nqn": "nqn.2016-06.io.spdk:cnode23198", 00:10:56.736 "model_number": "K}BK\")\\'+kSF,Qi-d}a>@%HUICOw$Y9Q]AC\u007fgFSN&" 00:10:56.736 } 00:10:56.736 } 00:10:56.736 Got JSON-RPC error response 00:10:56.736 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:56.736 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:56.994 [2024-11-19 17:09:49.483391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.994 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:57.559 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:57.559 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:57.559 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:57.559 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:57.559 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:57.818 [2024-11-19 17:09:50.168838] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:57.818 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:10:57.818 request: 00:10:57.818 { 00:10:57.818 "method": "nvmf_subsystem_remove_listener", 00:10:57.818 "params": { 00:10:57.818 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:57.818 "listen_address": { 00:10:57.818 "trtype": "tcp", 00:10:57.818 "traddr": "", 00:10:57.818 "trsvcid": "4421" 00:10:57.818 } 00:10:57.818 } 00:10:57.818 } 00:10:57.818 Got JSON-RPC error response 00:10:57.818 GoRPCClient: error on JSON-RPC call' 00:10:57.818 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:10:57.818 request: 00:10:57.818 { 00:10:57.818 "method": "nvmf_subsystem_remove_listener", 00:10:57.818 "params": { 
00:10:57.818 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:57.818 "listen_address": { 00:10:57.818 "trtype": "tcp", 00:10:57.818 "traddr": "", 00:10:57.818 "trsvcid": "4421" 00:10:57.818 } 00:10:57.818 } 00:10:57.818 } 00:10:57.818 Got JSON-RPC error response 00:10:57.818 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:57.818 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8692 -i 0 00:10:58.077 [2024-11-19 17:09:50.489234] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8692: invalid cntlid range [0-65519] 00:10:58.077 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8692], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:10:58.077 request: 00:10:58.077 { 00:10:58.077 "method": "nvmf_create_subsystem", 00:10:58.077 "params": { 00:10:58.077 "nqn": "nqn.2016-06.io.spdk:cnode8692", 00:10:58.077 "min_cntlid": 0 00:10:58.077 } 00:10:58.077 } 00:10:58.077 Got JSON-RPC error response 00:10:58.077 GoRPCClient: error on JSON-RPC call' 00:10:58.077 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8692], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:10:58.077 request: 00:10:58.077 { 00:10:58.077 "method": "nvmf_create_subsystem", 00:10:58.077 "params": { 00:10:58.077 "nqn": "nqn.2016-06.io.spdk:cnode8692", 00:10:58.077 "min_cntlid": 0 00:10:58.077 } 00:10:58.077 } 00:10:58.077 Got JSON-RPC error response 00:10:58.077 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:58.078 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26278 -i 65520 00:10:58.336 [2024-11-19 17:09:50.769619] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26278: invalid cntlid range [65520-65519] 00:10:58.336 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode26278], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:10:58.337 request: 00:10:58.337 { 00:10:58.337 "method": "nvmf_create_subsystem", 00:10:58.337 "params": { 00:10:58.337 "nqn": "nqn.2016-06.io.spdk:cnode26278", 00:10:58.337 "min_cntlid": 65520 00:10:58.337 } 00:10:58.337 } 00:10:58.337 Got JSON-RPC error response 00:10:58.337 GoRPCClient: error on JSON-RPC call' 00:10:58.337 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/19 17:09:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode26278], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:10:58.337 request: 00:10:58.337 { 00:10:58.337 "method": "nvmf_create_subsystem", 00:10:58.337 "params": 
{ 00:10:58.337 "nqn": "nqn.2016-06.io.spdk:cnode26278", 00:10:58.337 "min_cntlid": 65520 00:10:58.337 } 00:10:58.337 } 00:10:58.337 Got JSON-RPC error response 00:10:58.337 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:58.337 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23279 -I 0 00:10:58.595 [2024-11-19 17:09:51.098179] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23279: invalid cntlid range [1-0] 00:10:58.595 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:10:58.595 request: 00:10:58.595 { 00:10:58.595 "method": "nvmf_create_subsystem", 00:10:58.595 "params": { 00:10:58.595 "nqn": "nqn.2016-06.io.spdk:cnode23279", 00:10:58.595 "max_cntlid": 0 00:10:58.595 } 00:10:58.595 } 00:10:58.595 Got JSON-RPC error response 00:10:58.595 GoRPCClient: error on JSON-RPC call' 00:10:58.596 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:10:58.596 request: 00:10:58.596 { 00:10:58.596 "method": "nvmf_create_subsystem", 00:10:58.596 "params": { 00:10:58.596 "nqn": "nqn.2016-06.io.spdk:cnode23279", 00:10:58.596 "max_cntlid": 0 00:10:58.596 } 00:10:58.596 } 00:10:58.596 Got JSON-RPC error response 00:10:58.596 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:58.596 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22473 -I 65520 00:10:59.162 [2024-11-19 17:09:51.426639] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22473: invalid cntlid range [1-65520] 00:10:59.162 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode22473], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:10:59.162 request: 00:10:59.162 { 00:10:59.162 "method": "nvmf_create_subsystem", 00:10:59.162 "params": { 00:10:59.162 "nqn": "nqn.2016-06.io.spdk:cnode22473", 00:10:59.162 "max_cntlid": 65520 00:10:59.162 } 00:10:59.162 } 00:10:59.162 Got JSON-RPC error response 00:10:59.162 GoRPCClient: error on JSON-RPC call' 00:10:59.162 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode22473], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:10:59.162 request: 00:10:59.162 { 00:10:59.162 "method": "nvmf_create_subsystem", 00:10:59.162 "params": { 00:10:59.162 "nqn": "nqn.2016-06.io.spdk:cnode22473", 00:10:59.162 "max_cntlid": 65520 00:10:59.162 } 00:10:59.162 } 
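
Every cntlid probe in this stretch follows the same negative-test pattern: invoke the RPC with an out-of-range value, capture stderr into out, then glob-match the expected failure text. The error strings above pin down the valid controller-ID window as 1 through 65519, so 0, 65520, and a min greater than max are each rejected. A hedged sketch of one round of the pattern (error handling simplified; the real script asserts more than this):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  out=$("$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32524 -i 6 -I 5 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]]  # must fail, and for the right reason
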
00:10:59.162 Got JSON-RPC error response 00:10:59.162 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.162 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32524 -i 6 -I 5 00:10:59.421 [2024-11-19 17:09:51.767200] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32524: invalid cntlid range [6-5] 00:10:59.421 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode32524], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:10:59.421 request: 00:10:59.421 { 00:10:59.421 "method": "nvmf_create_subsystem", 00:10:59.421 "params": { 00:10:59.422 "nqn": "nqn.2016-06.io.spdk:cnode32524", 00:10:59.422 "min_cntlid": 6, 00:10:59.422 "max_cntlid": 5 00:10:59.422 } 00:10:59.422 } 00:10:59.422 Got JSON-RPC error response 00:10:59.422 GoRPCClient: error on JSON-RPC call' 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/19 17:09:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode32524], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:10:59.422 request: 00:10:59.422 { 00:10:59.422 "method": "nvmf_create_subsystem", 00:10:59.422 "params": { 00:10:59.422 "nqn": "nqn.2016-06.io.spdk:cnode32524", 00:10:59.422 "min_cntlid": 6, 00:10:59.422 "max_cntlid": 5 00:10:59.422 } 00:10:59.422 } 00:10:59.422 Got JSON-RPC error response 00:10:59.422 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:59.422 { 00:10:59.422 "name": "foobar", 00:10:59.422 "method": "nvmf_delete_target", 00:10:59.422 "req_id": 1 00:10:59.422 } 00:10:59.422 Got JSON-RPC error response 00:10:59.422 response: 00:10:59.422 { 00:10:59.422 "code": -32602, 00:10:59.422 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:59.422 }' 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:59.422 { 00:10:59.422 "name": "foobar", 00:10:59.422 "method": "nvmf_delete_target", 00:10:59.422 "req_id": 1 00:10:59.422 } 00:10:59.422 Got JSON-RPC error response 00:10:59.422 response: 00:10:59.422 { 00:10:59.422 "code": -32602, 00:10:59.422 "message": "The specified target doesn't exist, cannot delete it." 
00:10:59.422 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.422 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.680 rmmod nvme_tcp 00:10:59.680 rmmod nvme_fabrics 00:10:59.680 rmmod nvme_keyring 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74477 ']' 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74477 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74477 ']' 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74477 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74477 00:10:59.680 killing process with pid 74477 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74477' 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74477 00:10:59.680 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74477 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
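
The teardown above is nvmftestfini: sync, unload nvme-tcp, nvme-fabrics and nvme-keyring, then reap the target process. Reconstructed loosely from the traced commands, the killprocess helper has roughly this shape (a simplified sketch, not the verbatim autotest_common.sh source; the traced version also compares the process name against sudo before deciding how to kill):

  killprocess() {
    local pid=$1
    kill -0 "$pid"                               # bail out early if the pid is gone
    [[ $(uname) == Linux ]] && ps --no-headers -o comm= "$pid"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                          # reap it; a SIGTERM exit is expected
  }
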
iptables-restore 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.939 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:11:00.197 ************************************ 00:11:00.197 END TEST nvmf_invalid 00:11:00.197 ************************************ 00:11:00.197 00:11:00.197 real 0m6.622s 00:11:00.197 user 0m25.467s 00:11:00.197 sys 0m1.570s 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:00.197 17:09:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.198 ************************************ 00:11:00.198 START TEST nvmf_connect_stress 00:11:00.198 
************************************ 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.198 * Looking for test storage... 00:11:00.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.198 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.457 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.457 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.457 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.457 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.457 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.458 --rc genhtml_branch_coverage=1 00:11:00.458 --rc genhtml_function_coverage=1 00:11:00.458 --rc genhtml_legend=1 00:11:00.458 --rc geninfo_all_blocks=1 00:11:00.458 --rc geninfo_unexecuted_blocks=1 00:11:00.458 00:11:00.458 ' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.458 --rc genhtml_branch_coverage=1 00:11:00.458 --rc genhtml_function_coverage=1 00:11:00.458 --rc genhtml_legend=1 00:11:00.458 --rc geninfo_all_blocks=1 00:11:00.458 --rc geninfo_unexecuted_blocks=1 00:11:00.458 00:11:00.458 ' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.458 --rc genhtml_branch_coverage=1 00:11:00.458 --rc genhtml_function_coverage=1 00:11:00.458 --rc genhtml_legend=1 00:11:00.458 --rc geninfo_all_blocks=1 00:11:00.458 --rc geninfo_unexecuted_blocks=1 00:11:00.458 00:11:00.458 ' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.458 --rc genhtml_branch_coverage=1 00:11:00.458 --rc genhtml_function_coverage=1 00:11:00.458 --rc genhtml_legend=1 00:11:00.458 --rc geninfo_all_blocks=1 00:11:00.458 --rc geninfo_unexecuted_blocks=1 00:11:00.458 00:11:00.458 ' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
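
The trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 splits both version strings on ., - and :, then compares them component by component. A condensed sketch of that comparison (the real helper also normalizes each component through its decimal check):

  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
    local -a ver1 ver2
    local v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ver1[v] > ver2[v] )) && { [[ $op == ">" ]]; return; }  # unset parts act as 0
      (( ver1[v] < ver2[v] )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]             # versions compared equal
  }
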
00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:00.458 17:09:52 
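
One detail worth flagging in the noise above: the "[: : integer expression expected" line from nvmf/common.sh line 33 is a real captured shell error, not log corruption. build_nvmf_app_args hands an empty string to the numeric -eq test. A minimal reproduction, with a hypothetical variable name standing in for whichever flag is unset here:

  some_flag=''            # empty in this environment
  [ "$some_flag" -eq 1 ]  # -eq wants integers -> "[: : integer expression expected"
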
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.458 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.459 17:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.459 Cannot find device "nvmf_init_br" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.459 Cannot find device "nvmf_init_br2" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.459 Cannot find device "nvmf_tgt_br" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.459 Cannot find device "nvmf_tgt_br2" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.459 Cannot find device "nvmf_init_br" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.459 Cannot find device "nvmf_init_br2" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.459 Cannot find device "nvmf_tgt_br" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.459 Cannot find device "nvmf_tgt_br2" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.459 Cannot find device "nvmf_br" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.459 Cannot find device "nvmf_init_if" 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:11:00.459 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.718 Cannot find device "nvmf_init_if2" 00:11:00.718 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:11:00.718 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.718 17:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:11:00.718 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:00.718 17:09:53 
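
At this point nvmf_veth_init has assembled the whole test topology, mirroring the nvmf_veth_fini teardown traced earlier; the "Cannot find device" lines above are the expected output of the idempotent pre-cleanup, where every teardown command is allowed to fail. Condensed to a single initiator/target pair, the traced setup commands amount to the following (the master assignments are the @211-@214 lines just below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                                 # bridge the host-side ends
  ip link set nvmf_tgt_br master nvmf_br
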
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:00.718 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.718 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:11:00.718 00:11:00.718 --- 10.0.0.3 ping statistics --- 00:11:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.718 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:00.718 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:00.718 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:11:00.718 00:11:00.718 --- 10.0.0.4 ping statistics --- 00:11:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.718 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:00.718 00:11:00.718 --- 10.0.0.1 ping statistics --- 00:11:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.718 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:00.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:00.718 00:11:00.718 --- 10.0.0.2 ping statistics --- 00:11:00.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.718 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.718 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75035 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75035 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75035 ']' 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.977 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.977 [2024-11-19 17:09:53.316880] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:00.977 [2024-11-19 17:09:53.316987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.977 [2024-11-19 17:09:53.468900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.235 [2024-11-19 17:09:53.531015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.235 [2024-11-19 17:09:53.531355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.235 [2024-11-19 17:09:53.531549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.235 [2024-11-19 17:09:53.531784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.235 [2024-11-19 17:09:53.531997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.235 [2024-11-19 17:09:53.533368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.235 [2024-11-19 17:09:53.533508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.235 [2024-11-19 17:09:53.533511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 [2024-11-19 17:09:53.713599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.235 17:09:53 
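connect_stress.sh provisions the target with three RPCs: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host NQN, -s sets the serial number, -m 10 caps the namespace count), and attach a TCP listener on 10.0.0.3:4420. rpc_cmd is the test harness wrapper; the same sequence can be issued directly with SPDK's rpc.py against the running nvmf_tgt (a sketch of the equivalent invocation, not copied from the trace):

    # equivalent provisioning via scripts/rpc.py (sketch)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420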
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 [2024-11-19 17:09:53.737800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.235 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.236 NULL1 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75074 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.236 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.494 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.754 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:01.754 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:01.754 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.754 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.754 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.013 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.013 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:02.013 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.013 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.013 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.580 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.580 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:02.580 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.580 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.580 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.838 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.838 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:02.838 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.838 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.838 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.096 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.096 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:03.096 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.096 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.096 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.355 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.355 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:03.355 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.355 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.355 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.613 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.613 
17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:03.613 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.613 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.613 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.178 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.179 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:04.179 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.179 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.179 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.436 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.436 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:04.436 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.436 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.436 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.694 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.694 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:04.694 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.694 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.694 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.952 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.952 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:04.952 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.952 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.952 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.209 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.209 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:05.209 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.209 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.209 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.775 17:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:05.775 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.775 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.775 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.034 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.034 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:06.034 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.034 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.034 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.292 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.292 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:06.292 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.292 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.292 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.551 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.551 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:06.551 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.551 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.551 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.816 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.816 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:06.816 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.816 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.816 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.395 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.395 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:07.395 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.395 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.395 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.655 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.655 17:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:07.655 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.655 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.655 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.913 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.913 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:07.913 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.913 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.913 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.171 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.171 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:08.171 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.171 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.171 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.429 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.429 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:08.429 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.429 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.429 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.995 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.995 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:08.995 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.995 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.995 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.253 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.253 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:09.253 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.253 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.253 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.511 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.511 17:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:09.511 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.511 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.511 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.770 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.770 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:09.770 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.770 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.770 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.337 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.337 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:10.337 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.337 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.337 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.595 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.595 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:10.595 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.595 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.595 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.853 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.853 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:10.853 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.853 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.853 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.112 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:11.112 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.112 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.112 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.371 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.371 17:10:03 
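The repeated kill -0 75074 / rpc_cmd pairs above are the stress loop itself: connect_stress (PID 75074) hammers the subsystem from the initiator side for 10 seconds (-t 10), while the shell keeps replaying the rpc.txt batch of twenty queued RPCs (built by the seq 1 20 / cat loop earlier) for as long as the process is alive. kill -0 delivers no signal; it only tests that the PID still exists. The driver loop implied by the trace looks roughly like this (a sketch; the exact wording in connect_stress.sh is not shown here):

    # sketch of the loop implied by the repeated checks above
    while kill -0 "$PERF_PID" 2>/dev/null; do  # true while connect_stress is still running
        rpc_cmd < "$rpcs"                      # replay the queued RPCs against the target
    done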
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:11.371 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.371 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.371 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.629 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75074 00:11:11.889 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75074) - No such process 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75074 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.889 rmmod nvme_tcp 00:11:11.889 rmmod nvme_fabrics 00:11:11.889 rmmod nvme_keyring 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75035 ']' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75035 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75035 ']' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75035 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75035 00:11:11.889 killing process with pid 75035 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:11.889 
17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75035' 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75035 00:11:11.889 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75035 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:12.148 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:12.149 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:12.149 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.408 17:10:04 
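The firewall handling in this cleanup is symmetric with the setup phase: ipts installed each ACCEPT rule with an -m comment tag of the form SPDK_NVMF:<original arguments>, and iptr now removes every tagged rule in one pass by round-tripping the ruleset through a filter, with no need to track rule positions. The pattern in isolation, with both halves as shown in the trace:

    # install: tag the rule so it can be found later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # cleanup: drop every tagged rule in a single restore
    iptables-save | grep -v SPDK_NVMF | iptables-restore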
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:11:12.408 00:11:12.408 real 0m12.169s 00:11:12.408 user 0m39.809s 00:11:12.408 sys 0m3.137s 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 ************************************ 00:11:12.408 END TEST nvmf_connect_stress 00:11:12.408 ************************************ 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.408 ************************************ 00:11:12.408 START TEST nvmf_fused_ordering 00:11:12.408 ************************************ 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:12.408 * Looking for test storage... 00:11:12.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.408 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.669 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.669 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.669 17:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.669 --rc genhtml_branch_coverage=1 00:11:12.669 --rc genhtml_function_coverage=1 00:11:12.669 --rc genhtml_legend=1 00:11:12.669 --rc geninfo_all_blocks=1 00:11:12.669 --rc geninfo_unexecuted_blocks=1 00:11:12.669 00:11:12.669 ' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.669 --rc genhtml_branch_coverage=1 00:11:12.669 --rc genhtml_function_coverage=1 00:11:12.669 --rc genhtml_legend=1 00:11:12.669 --rc geninfo_all_blocks=1 00:11:12.669 --rc geninfo_unexecuted_blocks=1 00:11:12.669 00:11:12.669 ' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.669 --rc genhtml_branch_coverage=1 00:11:12.669 --rc genhtml_function_coverage=1 00:11:12.669 --rc genhtml_legend=1 00:11:12.669 --rc geninfo_all_blocks=1 00:11:12.669 --rc geninfo_unexecuted_blocks=1 00:11:12.669 00:11:12.669 ' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.669 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:12.669 --rc genhtml_branch_coverage=1 00:11:12.669 --rc genhtml_function_coverage=1 00:11:12.669 --rc genhtml_legend=1 00:11:12.669 --rc geninfo_all_blocks=1 00:11:12.669 --rc geninfo_unexecuted_blocks=1 00:11:12.669 00:11:12.669 ' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:12.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.669 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:12.670 17:10:05 
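The "integer expression expected" complaint from common.sh line 33 is benign: an unset flag expands to the empty string, and test's -eq operator requires integers, so the expression prints the error, evaluates false, and the script continues. The failure mode in isolation (generic variable name; the real one at line 33 is not shown in the trace):

    # reproducing the benign error from line 33
    flag=''
    [ "$flag" -eq 1 ] && echo enabled      # -> "[: : integer expression expected", exits false
    [ "${flag:-0}" -eq 1 ] && echo enabled # a common guard: default empty to 0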
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:12.670 Cannot find device "nvmf_init_br" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:12.670 Cannot find device "nvmf_init_br2" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:12.670 Cannot find device "nvmf_tgt_br" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.670 Cannot find device "nvmf_tgt_br2" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:12.670 Cannot find device "nvmf_init_br" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:12.670 Cannot find device "nvmf_init_br2" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:12.670 Cannot find device "nvmf_tgt_br" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:12.670 Cannot find device "nvmf_tgt_br2" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:12.670 Cannot find device "nvmf_br" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:12.670 Cannot find device "nvmf_init_if" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:11:12.670 
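The variables above pin down the whole test topology before any interface exists: the initiator addresses 10.0.0.1 and 10.0.0.2 stay in the root namespace, the target addresses 10.0.0.3 and 10.0.0.4 will live inside nvmf_tgt_ns_spdk, and the *_br peer of each veth pair gets enslaved to the nvmf_br bridge. The "Cannot find device" errors in this stretch are expected rather than failures: every teardown command is paired with a true under xtrace, so cleanup is idempotent on a host where nothing was set up yet. Condensed from the commands visible in this log (not the verbatim common.sh source), the cleanup-then-create pattern looks like:

  # Idempotent cleanup: each delete may fail on a fresh host, hence '|| true'
  # (the xtrace above shows every failure followed by '-- # true').
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true   # deleting one veth end removes its peer too

  # Recreate: namespace, veth pairs, addresses (one of the four pairs shown).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

Only the *_if ends are ever deleted explicitly because removing one end of a veth pair tears down its peer as well; the bridge is the lone device that needs its own delete.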
17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:12.670 Cannot find device "nvmf_init_if2" 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:11:12.670 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.930 17:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:12.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:12.930 00:11:12.930 --- 10.0.0.3 ping statistics --- 00:11:12.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.930 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:12.930 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:12.930 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:12.930 00:11:12.930 --- 10.0.0.4 ping statistics --- 00:11:12.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.930 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:12.930 00:11:12.930 --- 10.0.0.1 ping statistics --- 00:11:12.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.930 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:12.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:12.930 00:11:12.930 --- 10.0.0.2 ping statistics --- 00:11:12.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.930 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.930 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75454 00:11:13.189 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75454 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75454 ']' 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
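All four addresses answer ping across the bridge, so layer-3 connectivity is proven before the target ever starts. Two details worth noting from the setup just completed: each ipts rule carries an "-m comment --comment 'SPDK_NVMF:...'" tag, which is what lets teardown strip the firewall changes wholesale later, and NVMF_APP is prefixed with the namespace wrapper so every listener the target opens binds on the namespaced side. A minimal sketch of the launch-and-wait sequence, assuming a simple polling loop (the real waitforlisten in autotest_common.sh is more elaborate, with max_retries=100 and a configurable rpc_addr):

  # Start the target inside the namespace; its listeners will live on the
  # 10.0.0.3/10.0.0.4 side of the bridge.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Hypothetical stand-in for waitforlisten: poll for the RPC socket.
  for ((i = 0; i < 100; i++)); do
      [ -S /var/tmp/spdk.sock ] && break
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail if the target died
      sleep 0.1
  done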
00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.190 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.190 [2024-11-19 17:10:05.532271] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:11:13.190 [2024-11-19 17:10:05.532381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.190 [2024-11-19 17:10:05.685334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.448 [2024-11-19 17:10:05.753643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.448 [2024-11-19 17:10:05.753707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.448 [2024-11-19 17:10:05.753731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.448 [2024-11-19 17:10:05.753757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.448 [2024-11-19 17:10:05.753768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.448 [2024-11-19 17:10:05.754256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.448 [2024-11-19 17:10:05.936271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.448 [2024-11-19 17:10:05.952398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.448 NULL1 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.448 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.449 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.449 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:13.449 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.449 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.707 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.707 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.707 [2024-11-19 17:10:06.006462] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
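Steps @15 through @20 of fused_ordering.sh provision the target entirely over JSON-RPC before any I/O is issued: a TCP transport, subsystem cnode1 with any-host access and room for ten namespaces, a listener on the namespaced 10.0.0.3:4420, and a 1000 MiB null bdev attached as namespace 1 (matching the "size: 1GB" attach message below). Assuming rpc_cmd wraps SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, which is the usual autotest convention, the stand-alone equivalent would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # '-o' kept verbatim from the log
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                # any host, serial number, max 10 NS
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_null_create NULL1 1000 512              # 1000 MiB bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool then connects with the same transport-ID string and prints one numbered line per fused command pair it completes, which is the long sequential run that follows.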
00:11:13.707 [2024-11-19 17:10:06.006510] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75492 ] 00:11:13.965 Attached to nqn.2016-06.io.spdk:cnode1 00:11:13.966 Namespace ID: 1 size: 1GB 00:11:13.966 fused_ordering(0) 00:11:13.966 fused_ordering(1) 00:11:13.966 fused_ordering(2) 00:11:13.966 fused_ordering(3) 00:11:13.966 fused_ordering(4) 00:11:13.966 fused_ordering(5) 00:11:13.966 fused_ordering(6) 00:11:13.966 fused_ordering(7) 00:11:13.966 fused_ordering(8) 00:11:13.966 fused_ordering(9) 00:11:13.966 fused_ordering(10) 00:11:13.966 fused_ordering(11) 00:11:13.966 fused_ordering(12) 00:11:13.966 fused_ordering(13) 00:11:13.966 fused_ordering(14) 00:11:13.966 fused_ordering(15) 00:11:13.966 fused_ordering(16) 00:11:13.966 fused_ordering(17) 00:11:13.966 fused_ordering(18) 00:11:13.966 fused_ordering(19) 00:11:13.966 fused_ordering(20) 00:11:13.966 fused_ordering(21) 00:11:13.966 fused_ordering(22) 00:11:13.966 fused_ordering(23) 00:11:13.966 fused_ordering(24) 00:11:13.966 fused_ordering(25) 00:11:13.966 fused_ordering(26) 00:11:13.966 fused_ordering(27) 00:11:13.966 fused_ordering(28) 00:11:13.966 fused_ordering(29) 00:11:13.966 fused_ordering(30) 00:11:13.966 fused_ordering(31) 00:11:13.966 fused_ordering(32) 00:11:13.966 fused_ordering(33) 00:11:13.966 fused_ordering(34) 00:11:13.966 fused_ordering(35) 00:11:13.966 fused_ordering(36) 00:11:13.966 fused_ordering(37) 00:11:13.966 fused_ordering(38) 00:11:13.966 fused_ordering(39) 00:11:13.966 fused_ordering(40) 00:11:13.966 fused_ordering(41) 00:11:13.966 fused_ordering(42) 00:11:13.966 fused_ordering(43) 00:11:13.966 fused_ordering(44) 00:11:13.966 fused_ordering(45) 00:11:13.966 fused_ordering(46) 00:11:13.966 fused_ordering(47) 00:11:13.966 fused_ordering(48) 00:11:13.966 fused_ordering(49) 00:11:13.966 fused_ordering(50) 00:11:13.966 fused_ordering(51) 00:11:13.966 fused_ordering(52) 00:11:13.966 fused_ordering(53) 00:11:13.966 fused_ordering(54) 00:11:13.966 fused_ordering(55) 00:11:13.966 fused_ordering(56) 00:11:13.966 fused_ordering(57) 00:11:13.966 fused_ordering(58) 00:11:13.966 fused_ordering(59) 00:11:13.966 fused_ordering(60) 00:11:13.966 fused_ordering(61) 00:11:13.966 fused_ordering(62) 00:11:13.966 fused_ordering(63) 00:11:13.966 fused_ordering(64) 00:11:13.966 fused_ordering(65) 00:11:13.966 fused_ordering(66) 00:11:13.966 fused_ordering(67) 00:11:13.966 fused_ordering(68) 00:11:13.966 fused_ordering(69) 00:11:13.966 fused_ordering(70) 00:11:13.966 fused_ordering(71) 00:11:13.966 fused_ordering(72) 00:11:13.966 fused_ordering(73) 00:11:13.966 fused_ordering(74) 00:11:13.966 fused_ordering(75) 00:11:13.966 fused_ordering(76) 00:11:13.966 fused_ordering(77) 00:11:13.966 fused_ordering(78) 00:11:13.966 fused_ordering(79) 00:11:13.966 fused_ordering(80) 00:11:13.966 fused_ordering(81) 00:11:13.966 fused_ordering(82) 00:11:13.966 fused_ordering(83) 00:11:13.966 fused_ordering(84) 00:11:13.966 fused_ordering(85) 00:11:13.966 fused_ordering(86) 00:11:13.966 fused_ordering(87) 00:11:13.966 fused_ordering(88) 00:11:13.966 fused_ordering(89) 00:11:13.966 fused_ordering(90) 00:11:13.966 fused_ordering(91) 00:11:13.966 fused_ordering(92) 00:11:13.966 fused_ordering(93) 00:11:13.966 fused_ordering(94) 00:11:13.966 fused_ordering(95) 00:11:13.966 fused_ordering(96) 00:11:13.966 fused_ordering(97) 00:11:13.966 
fused_ordering(98) 00:11:13.966 fused_ordering(99) 00:11:13.966 fused_ordering(100) 00:11:13.966 fused_ordering(101) 00:11:13.966 fused_ordering(102) 00:11:13.966 fused_ordering(103) 00:11:13.966 fused_ordering(104) 00:11:13.966 fused_ordering(105) 00:11:13.966 fused_ordering(106) 00:11:13.966 fused_ordering(107) 00:11:13.966 fused_ordering(108) 00:11:13.966 fused_ordering(109) 00:11:13.966 fused_ordering(110) 00:11:13.966 fused_ordering(111) 00:11:13.966 fused_ordering(112) 00:11:13.966 fused_ordering(113) 00:11:13.966 fused_ordering(114) 00:11:13.966 fused_ordering(115) 00:11:13.966 fused_ordering(116) 00:11:13.966 fused_ordering(117) 00:11:13.966 fused_ordering(118) 00:11:13.966 fused_ordering(119) 00:11:13.966 fused_ordering(120) 00:11:13.966 fused_ordering(121) 00:11:13.966 fused_ordering(122) 00:11:13.966 fused_ordering(123) 00:11:13.966 fused_ordering(124) 00:11:13.966 fused_ordering(125) 00:11:13.966 fused_ordering(126) 00:11:13.966 fused_ordering(127) 00:11:13.966 fused_ordering(128) 00:11:13.966 fused_ordering(129) 00:11:13.966 fused_ordering(130) 00:11:13.966 fused_ordering(131) 00:11:13.966 fused_ordering(132) 00:11:13.966 fused_ordering(133) 00:11:13.966 fused_ordering(134) 00:11:13.966 fused_ordering(135) 00:11:13.966 fused_ordering(136) 00:11:13.966 fused_ordering(137) 00:11:13.966 fused_ordering(138) 00:11:13.966 fused_ordering(139) 00:11:13.966 fused_ordering(140) 00:11:13.966 fused_ordering(141) 00:11:13.966 fused_ordering(142) 00:11:13.966 fused_ordering(143) 00:11:13.966 fused_ordering(144) 00:11:13.966 fused_ordering(145) 00:11:13.966 fused_ordering(146) 00:11:13.966 fused_ordering(147) 00:11:13.966 fused_ordering(148) 00:11:13.966 fused_ordering(149) 00:11:13.966 fused_ordering(150) 00:11:13.966 fused_ordering(151) 00:11:13.966 fused_ordering(152) 00:11:13.966 fused_ordering(153) 00:11:13.966 fused_ordering(154) 00:11:13.966 fused_ordering(155) 00:11:13.966 fused_ordering(156) 00:11:13.966 fused_ordering(157) 00:11:13.966 fused_ordering(158) 00:11:13.966 fused_ordering(159) 00:11:13.966 fused_ordering(160) 00:11:13.966 fused_ordering(161) 00:11:13.966 fused_ordering(162) 00:11:13.966 fused_ordering(163) 00:11:13.966 fused_ordering(164) 00:11:13.966 fused_ordering(165) 00:11:13.966 fused_ordering(166) 00:11:13.966 fused_ordering(167) 00:11:13.966 fused_ordering(168) 00:11:13.966 fused_ordering(169) 00:11:13.966 fused_ordering(170) 00:11:13.966 fused_ordering(171) 00:11:13.966 fused_ordering(172) 00:11:13.966 fused_ordering(173) 00:11:13.966 fused_ordering(174) 00:11:13.966 fused_ordering(175) 00:11:13.966 fused_ordering(176) 00:11:13.966 fused_ordering(177) 00:11:13.966 fused_ordering(178) 00:11:13.966 fused_ordering(179) 00:11:13.966 fused_ordering(180) 00:11:13.966 fused_ordering(181) 00:11:13.966 fused_ordering(182) 00:11:13.966 fused_ordering(183) 00:11:13.966 fused_ordering(184) 00:11:13.966 fused_ordering(185) 00:11:13.966 fused_ordering(186) 00:11:13.966 fused_ordering(187) 00:11:13.966 fused_ordering(188) 00:11:13.966 fused_ordering(189) 00:11:13.966 fused_ordering(190) 00:11:13.966 fused_ordering(191) 00:11:13.966 fused_ordering(192) 00:11:13.966 fused_ordering(193) 00:11:13.966 fused_ordering(194) 00:11:13.966 fused_ordering(195) 00:11:13.966 fused_ordering(196) 00:11:13.966 fused_ordering(197) 00:11:13.966 fused_ordering(198) 00:11:13.966 fused_ordering(199) 00:11:13.966 fused_ordering(200) 00:11:13.966 fused_ordering(201) 00:11:13.966 fused_ordering(202) 00:11:13.966 fused_ordering(203) 00:11:13.966 fused_ordering(204) 00:11:13.966 fused_ordering(205) 
00:11:14.225 fused_ordering(206) 00:11:14.226 fused_ordering(207) 00:11:14.226 fused_ordering(208) 00:11:14.226 fused_ordering(209) 00:11:14.226 fused_ordering(210) 00:11:14.226 fused_ordering(211) 00:11:14.226 fused_ordering(212) 00:11:14.226 fused_ordering(213) 00:11:14.226 fused_ordering(214) 00:11:14.226 fused_ordering(215) 00:11:14.226 fused_ordering(216) 00:11:14.226 fused_ordering(217) 00:11:14.226 fused_ordering(218) 00:11:14.226 fused_ordering(219) 00:11:14.226 fused_ordering(220) 00:11:14.226 fused_ordering(221) 00:11:14.226 fused_ordering(222) 00:11:14.226 fused_ordering(223) 00:11:14.226 fused_ordering(224) 00:11:14.226 fused_ordering(225) 00:11:14.226 fused_ordering(226) 00:11:14.226 fused_ordering(227) 00:11:14.226 fused_ordering(228) 00:11:14.226 fused_ordering(229) 00:11:14.226 fused_ordering(230) 00:11:14.226 fused_ordering(231) 00:11:14.226 fused_ordering(232) 00:11:14.226 fused_ordering(233) 00:11:14.226 fused_ordering(234) 00:11:14.226 fused_ordering(235) 00:11:14.226 fused_ordering(236) 00:11:14.226 fused_ordering(237) 00:11:14.226 fused_ordering(238) 00:11:14.226 fused_ordering(239) 00:11:14.226 fused_ordering(240) 00:11:14.226 fused_ordering(241) 00:11:14.226 fused_ordering(242) 00:11:14.226 fused_ordering(243) 00:11:14.226 fused_ordering(244) 00:11:14.226 fused_ordering(245) 00:11:14.226 fused_ordering(246) 00:11:14.226 fused_ordering(247) 00:11:14.226 fused_ordering(248) 00:11:14.226 fused_ordering(249) 00:11:14.226 fused_ordering(250) 00:11:14.226 fused_ordering(251) 00:11:14.226 fused_ordering(252) 00:11:14.226 fused_ordering(253) 00:11:14.226 fused_ordering(254) 00:11:14.226 fused_ordering(255) 00:11:14.226 fused_ordering(256) 00:11:14.226 fused_ordering(257) 00:11:14.226 fused_ordering(258) 00:11:14.226 fused_ordering(259) 00:11:14.226 fused_ordering(260) 00:11:14.226 fused_ordering(261) 00:11:14.226 fused_ordering(262) 00:11:14.226 fused_ordering(263) 00:11:14.226 fused_ordering(264) 00:11:14.226 fused_ordering(265) 00:11:14.226 fused_ordering(266) 00:11:14.226 fused_ordering(267) 00:11:14.226 fused_ordering(268) 00:11:14.226 fused_ordering(269) 00:11:14.226 fused_ordering(270) 00:11:14.226 fused_ordering(271) 00:11:14.226 fused_ordering(272) 00:11:14.226 fused_ordering(273) 00:11:14.226 fused_ordering(274) 00:11:14.226 fused_ordering(275) 00:11:14.226 fused_ordering(276) 00:11:14.226 fused_ordering(277) 00:11:14.226 fused_ordering(278) 00:11:14.226 fused_ordering(279) 00:11:14.226 fused_ordering(280) 00:11:14.226 fused_ordering(281) 00:11:14.226 fused_ordering(282) 00:11:14.226 fused_ordering(283) 00:11:14.226 fused_ordering(284) 00:11:14.226 fused_ordering(285) 00:11:14.226 fused_ordering(286) 00:11:14.226 fused_ordering(287) 00:11:14.226 fused_ordering(288) 00:11:14.226 fused_ordering(289) 00:11:14.226 fused_ordering(290) 00:11:14.226 fused_ordering(291) 00:11:14.226 fused_ordering(292) 00:11:14.226 fused_ordering(293) 00:11:14.226 fused_ordering(294) 00:11:14.226 fused_ordering(295) 00:11:14.226 fused_ordering(296) 00:11:14.226 fused_ordering(297) 00:11:14.226 fused_ordering(298) 00:11:14.226 fused_ordering(299) 00:11:14.226 fused_ordering(300) 00:11:14.226 fused_ordering(301) 00:11:14.226 fused_ordering(302) 00:11:14.226 fused_ordering(303) 00:11:14.226 fused_ordering(304) 00:11:14.226 fused_ordering(305) 00:11:14.226 fused_ordering(306) 00:11:14.226 fused_ordering(307) 00:11:14.226 fused_ordering(308) 00:11:14.226 fused_ordering(309) 00:11:14.226 fused_ordering(310) 00:11:14.226 fused_ordering(311) 00:11:14.226 fused_ordering(312) 00:11:14.226 
fused_ordering(313) 00:11:14.226 fused_ordering(314) 00:11:14.226 fused_ordering(315) 00:11:14.226 fused_ordering(316) 00:11:14.226 fused_ordering(317) 00:11:14.226 fused_ordering(318) 00:11:14.226 fused_ordering(319) 00:11:14.226 fused_ordering(320) 00:11:14.226 fused_ordering(321) 00:11:14.226 fused_ordering(322) 00:11:14.226 fused_ordering(323) 00:11:14.226 fused_ordering(324) 00:11:14.226 fused_ordering(325) 00:11:14.226 fused_ordering(326) 00:11:14.226 fused_ordering(327) 00:11:14.226 fused_ordering(328) 00:11:14.226 fused_ordering(329) 00:11:14.226 fused_ordering(330) 00:11:14.226 fused_ordering(331) 00:11:14.226 fused_ordering(332) 00:11:14.226 fused_ordering(333) 00:11:14.226 fused_ordering(334) 00:11:14.226 fused_ordering(335) 00:11:14.226 fused_ordering(336) 00:11:14.226 fused_ordering(337) 00:11:14.226 fused_ordering(338) 00:11:14.226 fused_ordering(339) 00:11:14.226 fused_ordering(340) 00:11:14.226 fused_ordering(341) 00:11:14.226 fused_ordering(342) 00:11:14.226 fused_ordering(343) 00:11:14.226 fused_ordering(344) 00:11:14.226 fused_ordering(345) 00:11:14.226 fused_ordering(346) 00:11:14.226 fused_ordering(347) 00:11:14.226 fused_ordering(348) 00:11:14.226 fused_ordering(349) 00:11:14.226 fused_ordering(350) 00:11:14.226 fused_ordering(351) 00:11:14.226 fused_ordering(352) 00:11:14.226 fused_ordering(353) 00:11:14.226 fused_ordering(354) 00:11:14.226 fused_ordering(355) 00:11:14.226 fused_ordering(356) 00:11:14.226 fused_ordering(357) 00:11:14.226 fused_ordering(358) 00:11:14.226 fused_ordering(359) 00:11:14.226 fused_ordering(360) 00:11:14.226 fused_ordering(361) 00:11:14.226 fused_ordering(362) 00:11:14.226 fused_ordering(363) 00:11:14.226 fused_ordering(364) 00:11:14.226 fused_ordering(365) 00:11:14.226 fused_ordering(366) 00:11:14.226 fused_ordering(367) 00:11:14.226 fused_ordering(368) 00:11:14.226 fused_ordering(369) 00:11:14.226 fused_ordering(370) 00:11:14.226 fused_ordering(371) 00:11:14.226 fused_ordering(372) 00:11:14.226 fused_ordering(373) 00:11:14.226 fused_ordering(374) 00:11:14.226 fused_ordering(375) 00:11:14.226 fused_ordering(376) 00:11:14.226 fused_ordering(377) 00:11:14.226 fused_ordering(378) 00:11:14.226 fused_ordering(379) 00:11:14.226 fused_ordering(380) 00:11:14.226 fused_ordering(381) 00:11:14.226 fused_ordering(382) 00:11:14.226 fused_ordering(383) 00:11:14.226 fused_ordering(384) 00:11:14.226 fused_ordering(385) 00:11:14.226 fused_ordering(386) 00:11:14.226 fused_ordering(387) 00:11:14.226 fused_ordering(388) 00:11:14.226 fused_ordering(389) 00:11:14.226 fused_ordering(390) 00:11:14.226 fused_ordering(391) 00:11:14.226 fused_ordering(392) 00:11:14.226 fused_ordering(393) 00:11:14.226 fused_ordering(394) 00:11:14.226 fused_ordering(395) 00:11:14.226 fused_ordering(396) 00:11:14.226 fused_ordering(397) 00:11:14.226 fused_ordering(398) 00:11:14.226 fused_ordering(399) 00:11:14.226 fused_ordering(400) 00:11:14.226 fused_ordering(401) 00:11:14.226 fused_ordering(402) 00:11:14.226 fused_ordering(403) 00:11:14.226 fused_ordering(404) 00:11:14.226 fused_ordering(405) 00:11:14.226 fused_ordering(406) 00:11:14.226 fused_ordering(407) 00:11:14.226 fused_ordering(408) 00:11:14.226 fused_ordering(409) 00:11:14.226 fused_ordering(410) 00:11:14.794 fused_ordering(411) 00:11:14.794 fused_ordering(412) 00:11:14.794 fused_ordering(413) 00:11:14.794 fused_ordering(414) 00:11:14.794 fused_ordering(415) 00:11:14.794 fused_ordering(416) 00:11:14.794 fused_ordering(417) 00:11:14.794 fused_ordering(418) 00:11:14.794 fused_ordering(419) 00:11:14.794 fused_ordering(420) 
00:11:14.794 fused_ordering(421) 00:11:14.794 fused_ordering(422) 00:11:14.794 fused_ordering(423) 00:11:14.794 fused_ordering(424) 00:11:14.794 fused_ordering(425) 00:11:14.794 fused_ordering(426) 00:11:14.794 fused_ordering(427) 00:11:14.794 fused_ordering(428) 00:11:14.794 fused_ordering(429) 00:11:14.794 fused_ordering(430) 00:11:14.794 fused_ordering(431) 00:11:14.794 fused_ordering(432) 00:11:14.794 fused_ordering(433) 00:11:14.794 fused_ordering(434) 00:11:14.794 fused_ordering(435) 00:11:14.794 fused_ordering(436) 00:11:14.794 fused_ordering(437) 00:11:14.794 fused_ordering(438) 00:11:14.794 fused_ordering(439) 00:11:14.794 fused_ordering(440) 00:11:14.794 fused_ordering(441) 00:11:14.794 fused_ordering(442) 00:11:14.794 fused_ordering(443) 00:11:14.794 fused_ordering(444) 00:11:14.794 fused_ordering(445) 00:11:14.794 fused_ordering(446) 00:11:14.794 fused_ordering(447) 00:11:14.794 fused_ordering(448) 00:11:14.794 fused_ordering(449) 00:11:14.794 fused_ordering(450) 00:11:14.794 fused_ordering(451) 00:11:14.794 fused_ordering(452) 00:11:14.794 fused_ordering(453) 00:11:14.794 fused_ordering(454) 00:11:14.794 fused_ordering(455) 00:11:14.794 fused_ordering(456) 00:11:14.794 fused_ordering(457) 00:11:14.794 fused_ordering(458) 00:11:14.794 fused_ordering(459) 00:11:14.794 fused_ordering(460) 00:11:14.794 fused_ordering(461) 00:11:14.794 fused_ordering(462) 00:11:14.794 fused_ordering(463) 00:11:14.794 fused_ordering(464) 00:11:14.794 fused_ordering(465) 00:11:14.794 fused_ordering(466) 00:11:14.794 fused_ordering(467) 00:11:14.794 fused_ordering(468) 00:11:14.794 fused_ordering(469) 00:11:14.794 fused_ordering(470) 00:11:14.794 fused_ordering(471) 00:11:14.794 fused_ordering(472) 00:11:14.794 fused_ordering(473) 00:11:14.794 fused_ordering(474) 00:11:14.794 fused_ordering(475) 00:11:14.794 fused_ordering(476) 00:11:14.794 fused_ordering(477) 00:11:14.794 fused_ordering(478) 00:11:14.794 fused_ordering(479) 00:11:14.794 fused_ordering(480) 00:11:14.794 fused_ordering(481) 00:11:14.794 fused_ordering(482) 00:11:14.794 fused_ordering(483) 00:11:14.794 fused_ordering(484) 00:11:14.794 fused_ordering(485) 00:11:14.794 fused_ordering(486) 00:11:14.794 fused_ordering(487) 00:11:14.794 fused_ordering(488) 00:11:14.794 fused_ordering(489) 00:11:14.794 fused_ordering(490) 00:11:14.794 fused_ordering(491) 00:11:14.794 fused_ordering(492) 00:11:14.794 fused_ordering(493) 00:11:14.794 fused_ordering(494) 00:11:14.794 fused_ordering(495) 00:11:14.794 fused_ordering(496) 00:11:14.794 fused_ordering(497) 00:11:14.794 fused_ordering(498) 00:11:14.794 fused_ordering(499) 00:11:14.794 fused_ordering(500) 00:11:14.794 fused_ordering(501) 00:11:14.794 fused_ordering(502) 00:11:14.794 fused_ordering(503) 00:11:14.794 fused_ordering(504) 00:11:14.794 fused_ordering(505) 00:11:14.794 fused_ordering(506) 00:11:14.794 fused_ordering(507) 00:11:14.794 fused_ordering(508) 00:11:14.794 fused_ordering(509) 00:11:14.794 fused_ordering(510) 00:11:14.794 fused_ordering(511) 00:11:14.794 fused_ordering(512) 00:11:14.794 fused_ordering(513) 00:11:14.794 fused_ordering(514) 00:11:14.794 fused_ordering(515) 00:11:14.794 fused_ordering(516) 00:11:14.794 fused_ordering(517) 00:11:14.794 fused_ordering(518) 00:11:14.794 fused_ordering(519) 00:11:14.794 fused_ordering(520) 00:11:14.794 fused_ordering(521) 00:11:14.794 fused_ordering(522) 00:11:14.794 fused_ordering(523) 00:11:14.794 fused_ordering(524) 00:11:14.794 fused_ordering(525) 00:11:14.794 fused_ordering(526) 00:11:14.794 fused_ordering(527) 00:11:14.794 
fused_ordering(528) 00:11:14.794 fused_ordering(529) 00:11:14.794 fused_ordering(530) 00:11:14.794 fused_ordering(531) 00:11:14.794 fused_ordering(532) 00:11:14.794 fused_ordering(533) 00:11:14.794 fused_ordering(534) 00:11:14.794 fused_ordering(535) 00:11:14.794 fused_ordering(536) 00:11:14.794 fused_ordering(537) 00:11:14.794 fused_ordering(538) 00:11:14.794 fused_ordering(539) 00:11:14.794 fused_ordering(540) 00:11:14.794 fused_ordering(541) 00:11:14.794 fused_ordering(542) 00:11:14.794 fused_ordering(543) 00:11:14.794 fused_ordering(544) 00:11:14.794 fused_ordering(545) 00:11:14.794 fused_ordering(546) 00:11:14.794 fused_ordering(547) 00:11:14.794 fused_ordering(548) 00:11:14.794 fused_ordering(549) 00:11:14.794 fused_ordering(550) 00:11:14.794 fused_ordering(551) 00:11:14.794 fused_ordering(552) 00:11:14.794 fused_ordering(553) 00:11:14.794 fused_ordering(554) 00:11:14.794 fused_ordering(555) 00:11:14.794 fused_ordering(556) 00:11:14.794 fused_ordering(557) 00:11:14.794 fused_ordering(558) 00:11:14.794 fused_ordering(559) 00:11:14.794 fused_ordering(560) 00:11:14.794 fused_ordering(561) 00:11:14.794 fused_ordering(562) 00:11:14.794 fused_ordering(563) 00:11:14.794 fused_ordering(564) 00:11:14.794 fused_ordering(565) 00:11:14.794 fused_ordering(566) 00:11:14.794 fused_ordering(567) 00:11:14.794 fused_ordering(568) 00:11:14.794 fused_ordering(569) 00:11:14.794 fused_ordering(570) 00:11:14.794 fused_ordering(571) 00:11:14.794 fused_ordering(572) 00:11:14.794 fused_ordering(573) 00:11:14.794 fused_ordering(574) 00:11:14.794 fused_ordering(575) 00:11:14.794 fused_ordering(576) 00:11:14.794 fused_ordering(577) 00:11:14.794 fused_ordering(578) 00:11:14.794 fused_ordering(579) 00:11:14.794 fused_ordering(580) 00:11:14.794 fused_ordering(581) 00:11:14.794 fused_ordering(582) 00:11:14.794 fused_ordering(583) 00:11:14.794 fused_ordering(584) 00:11:14.794 fused_ordering(585) 00:11:14.794 fused_ordering(586) 00:11:14.794 fused_ordering(587) 00:11:14.794 fused_ordering(588) 00:11:14.794 fused_ordering(589) 00:11:14.794 fused_ordering(590) 00:11:14.794 fused_ordering(591) 00:11:14.794 fused_ordering(592) 00:11:14.794 fused_ordering(593) 00:11:14.794 fused_ordering(594) 00:11:14.794 fused_ordering(595) 00:11:14.794 fused_ordering(596) 00:11:14.794 fused_ordering(597) 00:11:14.794 fused_ordering(598) 00:11:14.794 fused_ordering(599) 00:11:14.794 fused_ordering(600) 00:11:14.794 fused_ordering(601) 00:11:14.794 fused_ordering(602) 00:11:14.794 fused_ordering(603) 00:11:14.794 fused_ordering(604) 00:11:14.794 fused_ordering(605) 00:11:14.794 fused_ordering(606) 00:11:14.794 fused_ordering(607) 00:11:14.794 fused_ordering(608) 00:11:14.794 fused_ordering(609) 00:11:14.794 fused_ordering(610) 00:11:14.794 fused_ordering(611) 00:11:14.794 fused_ordering(612) 00:11:14.794 fused_ordering(613) 00:11:14.794 fused_ordering(614) 00:11:14.794 fused_ordering(615) 00:11:15.053 fused_ordering(616) 00:11:15.053 fused_ordering(617) 00:11:15.053 fused_ordering(618) 00:11:15.053 fused_ordering(619) 00:11:15.053 fused_ordering(620) 00:11:15.053 fused_ordering(621) 00:11:15.053 fused_ordering(622) 00:11:15.053 fused_ordering(623) 00:11:15.053 fused_ordering(624) 00:11:15.053 fused_ordering(625) 00:11:15.053 fused_ordering(626) 00:11:15.053 fused_ordering(627) 00:11:15.053 fused_ordering(628) 00:11:15.053 fused_ordering(629) 00:11:15.053 fused_ordering(630) 00:11:15.053 fused_ordering(631) 00:11:15.053 fused_ordering(632) 00:11:15.053 fused_ordering(633) 00:11:15.053 fused_ordering(634) 00:11:15.053 fused_ordering(635) 
00:11:15.053 fused_ordering(636) 00:11:15.053 fused_ordering(637) 00:11:15.053 fused_ordering(638) 00:11:15.053 fused_ordering(639) 00:11:15.053 fused_ordering(640) 00:11:15.053 fused_ordering(641) 00:11:15.053 fused_ordering(642) 00:11:15.053 fused_ordering(643) 00:11:15.053 fused_ordering(644) 00:11:15.053 fused_ordering(645) 00:11:15.053 fused_ordering(646) 00:11:15.053 fused_ordering(647) 00:11:15.053 fused_ordering(648) 00:11:15.053 fused_ordering(649) 00:11:15.053 fused_ordering(650) 00:11:15.053 fused_ordering(651) 00:11:15.053 fused_ordering(652) 00:11:15.053 fused_ordering(653) 00:11:15.053 fused_ordering(654) 00:11:15.053 fused_ordering(655) 00:11:15.053 fused_ordering(656) 00:11:15.053 fused_ordering(657) 00:11:15.053 fused_ordering(658) 00:11:15.053 fused_ordering(659) 00:11:15.053 fused_ordering(660) 00:11:15.053 fused_ordering(661) 00:11:15.053 fused_ordering(662) 00:11:15.053 fused_ordering(663) 00:11:15.053 fused_ordering(664) 00:11:15.053 fused_ordering(665) 00:11:15.053 fused_ordering(666) 00:11:15.053 fused_ordering(667) 00:11:15.053 fused_ordering(668) 00:11:15.053 fused_ordering(669) 00:11:15.053 fused_ordering(670) 00:11:15.053 fused_ordering(671) 00:11:15.053 fused_ordering(672) 00:11:15.053 fused_ordering(673) 00:11:15.053 fused_ordering(674) 00:11:15.053 fused_ordering(675) 00:11:15.053 fused_ordering(676) 00:11:15.053 fused_ordering(677) 00:11:15.053 fused_ordering(678) 00:11:15.053 fused_ordering(679) 00:11:15.053 fused_ordering(680) 00:11:15.053 fused_ordering(681) 00:11:15.053 fused_ordering(682) 00:11:15.053 fused_ordering(683) 00:11:15.053 fused_ordering(684) 00:11:15.054 fused_ordering(685) 00:11:15.054 fused_ordering(686) 00:11:15.054 fused_ordering(687) 00:11:15.054 fused_ordering(688) 00:11:15.054 fused_ordering(689) 00:11:15.054 fused_ordering(690) 00:11:15.054 fused_ordering(691) 00:11:15.054 fused_ordering(692) 00:11:15.054 fused_ordering(693) 00:11:15.054 fused_ordering(694) 00:11:15.054 fused_ordering(695) 00:11:15.054 fused_ordering(696) 00:11:15.054 fused_ordering(697) 00:11:15.054 fused_ordering(698) 00:11:15.054 fused_ordering(699) 00:11:15.054 fused_ordering(700) 00:11:15.054 fused_ordering(701) 00:11:15.054 fused_ordering(702) 00:11:15.054 fused_ordering(703) 00:11:15.054 fused_ordering(704) 00:11:15.054 fused_ordering(705) 00:11:15.054 fused_ordering(706) 00:11:15.054 fused_ordering(707) 00:11:15.054 fused_ordering(708) 00:11:15.054 fused_ordering(709) 00:11:15.054 fused_ordering(710) 00:11:15.054 fused_ordering(711) 00:11:15.054 fused_ordering(712) 00:11:15.054 fused_ordering(713) 00:11:15.054 fused_ordering(714) 00:11:15.054 fused_ordering(715) 00:11:15.054 fused_ordering(716) 00:11:15.054 fused_ordering(717) 00:11:15.054 fused_ordering(718) 00:11:15.054 fused_ordering(719) 00:11:15.054 fused_ordering(720) 00:11:15.054 fused_ordering(721) 00:11:15.054 fused_ordering(722) 00:11:15.054 fused_ordering(723) 00:11:15.054 fused_ordering(724) 00:11:15.054 fused_ordering(725) 00:11:15.054 fused_ordering(726) 00:11:15.054 fused_ordering(727) 00:11:15.054 fused_ordering(728) 00:11:15.054 fused_ordering(729) 00:11:15.054 fused_ordering(730) 00:11:15.054 fused_ordering(731) 00:11:15.054 fused_ordering(732) 00:11:15.054 fused_ordering(733) 00:11:15.054 fused_ordering(734) 00:11:15.054 fused_ordering(735) 00:11:15.054 fused_ordering(736) 00:11:15.054 fused_ordering(737) 00:11:15.054 fused_ordering(738) 00:11:15.054 fused_ordering(739) 00:11:15.054 fused_ordering(740) 00:11:15.054 fused_ordering(741) 00:11:15.054 fused_ordering(742) 00:11:15.054 
fused_ordering(743) 00:11:15.054 fused_ordering(744) 00:11:15.054 fused_ordering(745) 00:11:15.054 fused_ordering(746) 00:11:15.054 fused_ordering(747) 00:11:15.054 fused_ordering(748) 00:11:15.054 fused_ordering(749) 00:11:15.054 fused_ordering(750) 00:11:15.054 fused_ordering(751) 00:11:15.054 fused_ordering(752) 00:11:15.054 fused_ordering(753) 00:11:15.054 fused_ordering(754) 00:11:15.054 fused_ordering(755) 00:11:15.054 fused_ordering(756) 00:11:15.054 fused_ordering(757) 00:11:15.054 fused_ordering(758) 00:11:15.054 fused_ordering(759) 00:11:15.054 fused_ordering(760) 00:11:15.054 fused_ordering(761) 00:11:15.054 fused_ordering(762) 00:11:15.054 fused_ordering(763) 00:11:15.054 fused_ordering(764) 00:11:15.054 fused_ordering(765) 00:11:15.054 fused_ordering(766) 00:11:15.054 fused_ordering(767) 00:11:15.054 fused_ordering(768) 00:11:15.054 fused_ordering(769) 00:11:15.054 fused_ordering(770) 00:11:15.054 fused_ordering(771) 00:11:15.054 fused_ordering(772) 00:11:15.054 fused_ordering(773) 00:11:15.054 fused_ordering(774) 00:11:15.054 fused_ordering(775) 00:11:15.054 fused_ordering(776) 00:11:15.054 fused_ordering(777) 00:11:15.054 fused_ordering(778) 00:11:15.054 fused_ordering(779) 00:11:15.054 fused_ordering(780) 00:11:15.054 fused_ordering(781) 00:11:15.054 fused_ordering(782) 00:11:15.054 fused_ordering(783) 00:11:15.054 fused_ordering(784) 00:11:15.054 fused_ordering(785) 00:11:15.054 fused_ordering(786) 00:11:15.054 fused_ordering(787) 00:11:15.054 fused_ordering(788) 00:11:15.054 fused_ordering(789) 00:11:15.054 fused_ordering(790) 00:11:15.054 fused_ordering(791) 00:11:15.054 fused_ordering(792) 00:11:15.054 fused_ordering(793) 00:11:15.054 fused_ordering(794) 00:11:15.054 fused_ordering(795) 00:11:15.054 fused_ordering(796) 00:11:15.054 fused_ordering(797) 00:11:15.054 fused_ordering(798) 00:11:15.054 fused_ordering(799) 00:11:15.054 fused_ordering(800) 00:11:15.054 fused_ordering(801) 00:11:15.054 fused_ordering(802) 00:11:15.054 fused_ordering(803) 00:11:15.054 fused_ordering(804) 00:11:15.054 fused_ordering(805) 00:11:15.054 fused_ordering(806) 00:11:15.054 fused_ordering(807) 00:11:15.054 fused_ordering(808) 00:11:15.054 fused_ordering(809) 00:11:15.054 fused_ordering(810) 00:11:15.054 fused_ordering(811) 00:11:15.054 fused_ordering(812) 00:11:15.054 fused_ordering(813) 00:11:15.054 fused_ordering(814) 00:11:15.054 fused_ordering(815) 00:11:15.054 fused_ordering(816) 00:11:15.054 fused_ordering(817) 00:11:15.054 fused_ordering(818) 00:11:15.054 fused_ordering(819) 00:11:15.054 fused_ordering(820) 00:11:15.622 fused_ordering(821) 00:11:15.622 fused_ordering(822) 00:11:15.622 fused_ordering(823) 00:11:15.622 fused_ordering(824) 00:11:15.622 fused_ordering(825) 00:11:15.622 fused_ordering(826) 00:11:15.622 fused_ordering(827) 00:11:15.622 fused_ordering(828) 00:11:15.622 fused_ordering(829) 00:11:15.622 fused_ordering(830) 00:11:15.622 fused_ordering(831) 00:11:15.622 fused_ordering(832) 00:11:15.622 fused_ordering(833) 00:11:15.622 fused_ordering(834) 00:11:15.622 fused_ordering(835) 00:11:15.622 fused_ordering(836) 00:11:15.622 fused_ordering(837) 00:11:15.622 fused_ordering(838) 00:11:15.622 fused_ordering(839) 00:11:15.622 fused_ordering(840) 00:11:15.622 fused_ordering(841) 00:11:15.622 fused_ordering(842) 00:11:15.622 fused_ordering(843) 00:11:15.622 fused_ordering(844) 00:11:15.622 fused_ordering(845) 00:11:15.622 fused_ordering(846) 00:11:15.622 fused_ordering(847) 00:11:15.622 fused_ordering(848) 00:11:15.622 fused_ordering(849) 00:11:15.622 fused_ordering(850) 
00:11:15.622 fused_ordering(851) 00:11:15.622 fused_ordering(852) 00:11:15.622 fused_ordering(853) 00:11:15.622 fused_ordering(854) 00:11:15.622 fused_ordering(855) 00:11:15.622 fused_ordering(856) 00:11:15.622 fused_ordering(857) 00:11:15.622 fused_ordering(858) 00:11:15.622 fused_ordering(859) 00:11:15.622 fused_ordering(860) 00:11:15.622 fused_ordering(861) 00:11:15.622 fused_ordering(862) 00:11:15.622 fused_ordering(863) 00:11:15.622 fused_ordering(864) 00:11:15.622 fused_ordering(865) 00:11:15.622 fused_ordering(866) 00:11:15.622 fused_ordering(867) 00:11:15.622 fused_ordering(868) 00:11:15.622 fused_ordering(869) 00:11:15.622 fused_ordering(870) 00:11:15.622 fused_ordering(871) 00:11:15.622 fused_ordering(872) 00:11:15.622 fused_ordering(873) 00:11:15.622 fused_ordering(874) 00:11:15.622 fused_ordering(875) 00:11:15.622 fused_ordering(876) 00:11:15.622 fused_ordering(877) 00:11:15.622 fused_ordering(878) 00:11:15.622 fused_ordering(879) 00:11:15.622 fused_ordering(880) 00:11:15.622 fused_ordering(881) 00:11:15.622 fused_ordering(882) 00:11:15.622 fused_ordering(883) 00:11:15.622 fused_ordering(884) 00:11:15.622 fused_ordering(885) 00:11:15.622 fused_ordering(886) 00:11:15.622 fused_ordering(887) 00:11:15.622 fused_ordering(888) 00:11:15.622 fused_ordering(889) 00:11:15.622 fused_ordering(890) 00:11:15.622 fused_ordering(891) 00:11:15.622 fused_ordering(892) 00:11:15.622 fused_ordering(893) 00:11:15.622 fused_ordering(894) 00:11:15.622 fused_ordering(895) 00:11:15.622 fused_ordering(896) 00:11:15.622 fused_ordering(897) 00:11:15.622 fused_ordering(898) 00:11:15.622 fused_ordering(899) 00:11:15.622 fused_ordering(900) 00:11:15.622 fused_ordering(901) 00:11:15.622 fused_ordering(902) 00:11:15.622 fused_ordering(903) 00:11:15.623 fused_ordering(904) 00:11:15.623 fused_ordering(905) 00:11:15.623 fused_ordering(906) 00:11:15.623 fused_ordering(907) 00:11:15.623 fused_ordering(908) 00:11:15.623 fused_ordering(909) 00:11:15.623 fused_ordering(910) 00:11:15.623 fused_ordering(911) 00:11:15.623 fused_ordering(912) 00:11:15.623 fused_ordering(913) 00:11:15.623 fused_ordering(914) 00:11:15.623 fused_ordering(915) 00:11:15.623 fused_ordering(916) 00:11:15.623 fused_ordering(917) 00:11:15.623 fused_ordering(918) 00:11:15.623 fused_ordering(919) 00:11:15.623 fused_ordering(920) 00:11:15.623 fused_ordering(921) 00:11:15.623 fused_ordering(922) 00:11:15.623 fused_ordering(923) 00:11:15.623 fused_ordering(924) 00:11:15.623 fused_ordering(925) 00:11:15.623 fused_ordering(926) 00:11:15.623 fused_ordering(927) 00:11:15.623 fused_ordering(928) 00:11:15.623 fused_ordering(929) 00:11:15.623 fused_ordering(930) 00:11:15.623 fused_ordering(931) 00:11:15.623 fused_ordering(932) 00:11:15.623 fused_ordering(933) 00:11:15.623 fused_ordering(934) 00:11:15.623 fused_ordering(935) 00:11:15.623 fused_ordering(936) 00:11:15.623 fused_ordering(937) 00:11:15.623 fused_ordering(938) 00:11:15.623 fused_ordering(939) 00:11:15.623 fused_ordering(940) 00:11:15.623 fused_ordering(941) 00:11:15.623 fused_ordering(942) 00:11:15.623 fused_ordering(943) 00:11:15.623 fused_ordering(944) 00:11:15.623 fused_ordering(945) 00:11:15.623 fused_ordering(946) 00:11:15.623 fused_ordering(947) 00:11:15.623 fused_ordering(948) 00:11:15.623 fused_ordering(949) 00:11:15.623 fused_ordering(950) 00:11:15.623 fused_ordering(951) 00:11:15.623 fused_ordering(952) 00:11:15.623 fused_ordering(953) 00:11:15.623 fused_ordering(954) 00:11:15.623 fused_ordering(955) 00:11:15.623 fused_ordering(956) 00:11:15.623 fused_ordering(957) 00:11:15.623 
fused_ordering(958) 00:11:15.623 fused_ordering(959) 00:11:15.623 fused_ordering(960) 00:11:15.623 fused_ordering(961) 00:11:15.623 fused_ordering(962) 00:11:15.623 fused_ordering(963) 00:11:15.623 fused_ordering(964) 00:11:15.623 fused_ordering(965) 00:11:15.623 fused_ordering(966) 00:11:15.623 fused_ordering(967) 00:11:15.623 fused_ordering(968) 00:11:15.623 fused_ordering(969) 00:11:15.623 fused_ordering(970) 00:11:15.623 fused_ordering(971) 00:11:15.623 fused_ordering(972) 00:11:15.623 fused_ordering(973) 00:11:15.623 fused_ordering(974) 00:11:15.623 fused_ordering(975) 00:11:15.623 fused_ordering(976) 00:11:15.623 fused_ordering(977) 00:11:15.623 fused_ordering(978) 00:11:15.623 fused_ordering(979) 00:11:15.623 fused_ordering(980) 00:11:15.623 fused_ordering(981) 00:11:15.623 fused_ordering(982) 00:11:15.623 fused_ordering(983) 00:11:15.623 fused_ordering(984) 00:11:15.623 fused_ordering(985) 00:11:15.623 fused_ordering(986) 00:11:15.623 fused_ordering(987) 00:11:15.623 fused_ordering(988) 00:11:15.623 fused_ordering(989) 00:11:15.623 fused_ordering(990) 00:11:15.623 fused_ordering(991) 00:11:15.623 fused_ordering(992) 00:11:15.623 fused_ordering(993) 00:11:15.623 fused_ordering(994) 00:11:15.623 fused_ordering(995) 00:11:15.623 fused_ordering(996) 00:11:15.623 fused_ordering(997) 00:11:15.623 fused_ordering(998) 00:11:15.623 fused_ordering(999) 00:11:15.623 fused_ordering(1000) 00:11:15.623 fused_ordering(1001) 00:11:15.623 fused_ordering(1002) 00:11:15.623 fused_ordering(1003) 00:11:15.623 fused_ordering(1004) 00:11:15.623 fused_ordering(1005) 00:11:15.623 fused_ordering(1006) 00:11:15.623 fused_ordering(1007) 00:11:15.623 fused_ordering(1008) 00:11:15.623 fused_ordering(1009) 00:11:15.623 fused_ordering(1010) 00:11:15.623 fused_ordering(1011) 00:11:15.623 fused_ordering(1012) 00:11:15.623 fused_ordering(1013) 00:11:15.623 fused_ordering(1014) 00:11:15.623 fused_ordering(1015) 00:11:15.623 fused_ordering(1016) 00:11:15.623 fused_ordering(1017) 00:11:15.623 fused_ordering(1018) 00:11:15.623 fused_ordering(1019) 00:11:15.623 fused_ordering(1020) 00:11:15.623 fused_ordering(1021) 00:11:15.623 fused_ordering(1022) 00:11:15.623 fused_ordering(1023) 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.623 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.623 rmmod nvme_tcp 00:11:15.623 rmmod nvme_fabrics 00:11:15.882 rmmod nvme_keyring 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:15.882 17:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75454 ']' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75454 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75454 ']' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75454 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75454 00:11:15.882 killing process with pid 75454 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75454' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75454 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75454 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.882 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:11:16.141 00:11:16.141 real 0m3.821s 00:11:16.141 user 0m4.027s 00:11:16.141 sys 0m1.619s 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.141 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.141 ************************************ 00:11:16.141 END TEST nvmf_fused_ordering 00:11:16.141 ************************************ 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.401 ************************************ 00:11:16.401 START TEST nvmf_ns_masking 00:11:16.401 ************************************ 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:16.401 * Looking for test storage... 
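
The trace above is the stock nvmf/common.sh teardown: the SPDK-tagged iptables rules are stripped out, the bridge-port ends are detached and downed, the veth pairs and the bridge are deleted, and the target network namespace goes with them. A minimal standalone sketch of that sequence, reconstructed from the trace rather than copied from nvmf/common.sh (the final ip netns delete is an assumption about what _remove_spdk_ns ends up doing):

#!/usr/bin/env bash
# Hedged sketch of the nvmf_tcp_fini / nvmf_veth_fini teardown traced above.

# Drop only the SPDK-tagged iptables rules. Every rule the harness adds
# carries an "SPDK_NVMF:" comment, which is what makes this grep safe.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge-port ends of the veth pairs, then take them down.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster || true
    ip link set "$port" down || true
done

# Delete the bridge and the initiator-side veth endpoints.
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if  || true
ip link delete nvmf_init_if2 || true

# The target-side endpoints live inside the test namespace.
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

# Assumption: _remove_spdk_ns ultimately deletes the namespace itself.
ip netns delete nvmf_tgt_ns_spdk || true
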
00:11:16.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:16.401 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.402 --rc genhtml_branch_coverage=1 00:11:16.402 --rc genhtml_function_coverage=1 00:11:16.402 --rc genhtml_legend=1 00:11:16.402 --rc geninfo_all_blocks=1 00:11:16.402 --rc geninfo_unexecuted_blocks=1 00:11:16.402 00:11:16.402 ' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.402 --rc genhtml_branch_coverage=1 00:11:16.402 --rc genhtml_function_coverage=1 00:11:16.402 --rc genhtml_legend=1 00:11:16.402 --rc geninfo_all_blocks=1 00:11:16.402 --rc geninfo_unexecuted_blocks=1 00:11:16.402 00:11:16.402 ' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.402 --rc genhtml_branch_coverage=1 00:11:16.402 --rc genhtml_function_coverage=1 00:11:16.402 --rc genhtml_legend=1 00:11:16.402 --rc geninfo_all_blocks=1 00:11:16.402 --rc geninfo_unexecuted_blocks=1 00:11:16.402 00:11:16.402 ' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.402 --rc genhtml_branch_coverage=1 00:11:16.402 --rc genhtml_function_coverage=1 00:11:16.402 --rc genhtml_legend=1 00:11:16.402 --rc geninfo_all_blocks=1 00:11:16.402 --rc geninfo_unexecuted_blocks=1 00:11:16.402 00:11:16.402 ' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.402 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:11:16.402 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7b71ebc4-e0dd-41c9-acf1-9aa999c89de8 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4dee6971-7600-4bf4-8a2b-933a27224650 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=006f55bf-10f8-49a6-b620-399d22cf7a35 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.662 17:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.662 Cannot find device "nvmf_init_br" 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.662 Cannot find device "nvmf_init_br2" 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.662 Cannot find device "nvmf_tgt_br" 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.662 Cannot find device "nvmf_tgt_br2" 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:11:16.662 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.662 Cannot find device "nvmf_init_br" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.662 Cannot find device "nvmf_init_br2" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.662 Cannot find device "nvmf_tgt_br" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.662 Cannot find device 
"nvmf_tgt_br2" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.662 Cannot find device "nvmf_br" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.662 Cannot find device "nvmf_init_if" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.662 Cannot find device "nvmf_init_if2" 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.662 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:16.923 
17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:16.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:11:16.923 00:11:16.923 --- 10.0.0.3 ping statistics --- 00:11:16.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.923 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:16.923 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:11:16.923 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:11:16.923 00:11:16.923 --- 10.0.0.4 ping statistics --- 00:11:16.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.923 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:11:16.923 00:11:16.923 --- 10.0.0.1 ping statistics --- 00:11:16.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.923 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:16.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:16.923 00:11:16.923 --- 10.0.0.2 ping statistics --- 00:11:16.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.923 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75738 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75738 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75738 ']' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
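
The long run of "Cannot find device" / "# true" pairs earlier is pre-flight cleanup: each teardown command is expected to fail on a fresh host, and the true keeps the harness's error handling quiet. What follows in the trace is nvmf_veth_init building the test topology: two veth pairs per side, the target ends moved into a private namespace, everything tied together by one bridge, and a ping sweep to prove connectivity. A condensed sketch with the same names and addressing as the trace (the iptables comment tags are simplified; the harness embeds the full rule text in the SPDK_NVMF comment, and any unique tag works for the teardown grep):

#!/usr/bin/env bash
set -e
# Hedged reconstruction of nvmf_veth_init from the trace above.

ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: <endpoint> <-> <bridge port>.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target endpoints move into the namespace; initiator endpoints stay put.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: .1/.2 on the initiator side, .3/.4 on the target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every link up (loopback too, inside the namespace).
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the four bridge-port ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
done

# Open TCP/4420 on the initiator interfaces and allow bridge-internal
# forwarding, tagging each rule so teardown can find it again.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:init_if'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:init_if2'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:forward'

# Ping sweep across all four addresses, as the trace does.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
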
00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.923 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:17.195 [2024-11-19 17:10:09.458711] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:11:17.196 [2024-11-19 17:10:09.459400] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.196 [2024-11-19 17:10:09.608039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.196 [2024-11-19 17:10:09.668965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.196 [2024-11-19 17:10:09.669282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.196 [2024-11-19 17:10:09.669301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.196 [2024-11-19 17:10:09.669311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.196 [2024-11-19 17:10:09.669318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
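
nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A small sketch of the launch-and-wait step; the polling loop is an assumed stand-in for waitforlisten, using rpc.py rpc_get_methods as the readiness probe:

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Start the target inside the test namespace: shm id 0 (-i 0), all
# tracepoint groups enabled (-e 0xFFFF), exactly as in the trace.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Assumed stand-in for waitforlisten: poll the default RPC socket until
# the app answers a harmless query.
for _ in $(seq 1 100); do
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is up; RPC socket ready"
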
00:11:17.196 [2024-11-19 17:10:09.669731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:18.137 [2024-11-19 17:10:10.631674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:18.137 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:18.395 Malloc1 00:11:18.395 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:18.654 Malloc2 00:11:18.916 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.176 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:19.434 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:19.434 [2024-11-19 17:10:11.939254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:19.434 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:19.434 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 006f55bf-10f8-49a6-b620-399d22cf7a35 -a 10.0.0.3 -s 4420 -i 4 00:11:19.692 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.692 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:19.692 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.692 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:19.692 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:21.593 17:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:21.593 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.851 [ 0]:0x1 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=20874e05f91742bcbbe3ecaa3bb44fe8 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 20874e05f91742bcbbe3ecaa3bb44fe8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.851 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:22.110 [ 0]:0x1 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=20874e05f91742bcbbe3ecaa3bb44fe8 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 20874e05f91742bcbbe3ecaa3bb44fe8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:22.110 [ 1]:0x2 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:22.110 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:22.369 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:22.369 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.369 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:22.369 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.369 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.628 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:22.886 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:22.886 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 006f55bf-10f8-49a6-b620-399d22cf7a35 -a 10.0.0.3 -s 4420 -i 4 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:23.144 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 
00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.059 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.317 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.317 [ 0]:0x2 00:11:25.318 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.318 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.318 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:25.318 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.318 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:25.575 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:25.575 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.575 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.575 [ 0]:0x1 00:11:25.575 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.575 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.575 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=20874e05f91742bcbbe3ecaa3bb44fe8 00:11:25.575 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 20874e05f91742bcbbe3ecaa3bb44fe8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.575 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:25.576 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.576 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.576 [ 1]:0x2 00:11:25.576 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.576 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.833 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:25.833 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.833 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:26.091 17:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.091 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:26.092 [ 0]:0x2 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:26.092 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.350 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:26.609 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:26.609 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 006f55bf-10f8-49a6-b620-399d22cf7a35 -a 10.0.0.3 -s 4420 -i 4 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:26.609 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:28.514 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.514 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.514 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.774 [ 0]:0x1 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=20874e05f91742bcbbe3ecaa3bb44fe8 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 20874e05f91742bcbbe3ecaa3bb44fe8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.774 [ 1]:0x2 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.774 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.033 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:29.292 [ 0]:0x2 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.292 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:29.293 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:29.552 [2024-11-19 17:10:21.909790] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:29.552 2024/11/19 17:10:21 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:11:29.552 request: 00:11:29.552 { 00:11:29.552 "method": "nvmf_ns_remove_host", 00:11:29.552 "params": { 00:11:29.552 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.552 "nsid": 2, 00:11:29.552 "host": "nqn.2016-06.io.spdk:host1" 00:11:29.552 } 00:11:29.552 } 00:11:29.552 Got JSON-RPC error response 00:11:29.552 GoRPCClient: error on JSON-RPC call 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.552 17:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.552 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:29.552 [ 0]:0x2 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3212e80055f54ae0b06ea8a234986428 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3212e80055f54ae0b06ea8a234986428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:29.552 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
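[editor's note: the masking checks above all drive the same host-side probe — list the controller's active namespaces, then pull the NGUID out of Identify Namespace and compare it against the all-zero value. A minimal standalone sketch of that probe, assuming the same /dev/nvme0 node as the trace (the real helper is ns_is_visible in target/ns_masking.sh):]

#!/usr/bin/env bash
# Probe whether a namespace is visible to this host, as in the trace above.
ctrl=/dev/nvme0

ns_is_visible() {
    local nsid=$1 nguid
    # A masked namespace drops out of the controller's active NS list...
    nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
    # ...and/or identifies with an all-zero NGUID.
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x1 && echo "nsid 1 visible"
ns_is_visible 0x2 && echo "nsid 2 visible"

[Visibility is flipped from the target side with the RPCs seen in the trace, e.g. rpc.py nvmf_ns_add_host / nvmf_ns_remove_host on nqn.2016-06.io.spdk:cnode1, which is why the probe is run again after each RPC.]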
00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76121 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76121 /var/tmp/host.sock 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76121 ']' 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.811 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:29.811 [2024-11-19 17:10:22.217599] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:11:29.811 [2024-11-19 17:10:22.217714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76121 ] 00:11:30.070 [2024-11-19 17:10:22.367638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.070 [2024-11-19 17:10:22.426860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.328 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.328 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:30.329 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.588 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.847 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7b71ebc4-e0dd-41c9-acf1-9aa999c89de8 00:11:30.847 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:30.847 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B71EBC4E0DD41C9ACF19AA999C89DE8 -i 00:11:31.106 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4dee6971-7600-4bf4-8a2b-933a27224650 00:11:31.106 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:31.106 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4DEE697176004BF48A2B933A27224650 -i 00:11:31.673 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:31.932 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:32.191 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:32.191 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:32.450 nvme0n1 00:11:32.450 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:32.450 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:32.708 nvme1n2 00:11:32.967 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:32.967 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:32.967 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:32.967 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:32.967 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:33.227 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:33.227 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:33.227 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:33.227 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:33.487 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7b71ebc4-e0dd-41c9-acf1-9aa999c89de8 == \7\b\7\1\e\b\c\4\-\e\0\d\d\-\4\1\c\9\-\a\c\f\1\-\9\a\a\9\9\9\c\8\9\d\e\8 ]] 00:11:33.487 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:33.487 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:33.487 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:33.746 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@136 -- # [[ 4dee6971-7600-4bf4-8a2b-933a27224650 == \4\d\e\e\6\9\7\1\-\7\6\0\0\-\4\b\f\4\-\8\a\2\b\-\9\3\3\a\2\7\2\2\4\6\5\0 ]] 00:11:33.746 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.007 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7b71ebc4-e0dd-41c9-acf1-9aa999c89de8 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B71EBC4E0DD41C9ACF19AA999C89DE8 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B71EBC4E0DD41C9ACF19AA999C89DE8 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:34.269 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B71EBC4E0DD41C9ACF19AA999C89DE8 00:11:34.528 [2024-11-19 17:10:27.004684] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:34.528 [2024-11-19 17:10:27.004772] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:34.528 [2024-11-19 17:10:27.004787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.528 2024/11/19 17:10:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:7B71EBC4E0DD41C9ACF19AA999C89DE8 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:11:34.528 request: 00:11:34.528 { 00:11:34.528 "method": "nvmf_subsystem_add_ns", 00:11:34.528 "params": { 00:11:34.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.528 "namespace": { 00:11:34.528 "bdev_name": "invalid", 00:11:34.528 "nsid": 1, 00:11:34.528 "nguid": "7B71EBC4E0DD41C9ACF19AA999C89DE8", 00:11:34.528 "no_auto_visible": false 00:11:34.528 } 00:11:34.528 } 00:11:34.528 } 00:11:34.528 Got JSON-RPC error response 00:11:34.528 GoRPCClient: error on JSON-RPC call 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7b71ebc4-e0dd-41c9-acf1-9aa999c89de8 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:34.528 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B71EBC4E0DD41C9ACF19AA999C89DE8 -i 00:11:35.095 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:37.000 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:37.000 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:37.000 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76121 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76121 ']' 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76121 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76121 00:11:37.258 killing process with pid 76121 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76121' 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76121 00:11:37.258 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76121 00:11:37.823 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.082 rmmod nvme_tcp 00:11:38.082 rmmod nvme_fabrics 00:11:38.082 rmmod nvme_keyring 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 75738 ']' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75738 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75738 ']' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75738 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75738 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.082 killing process with pid 75738 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75738' 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75738 00:11:38.082 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75738 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.341 
17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:38.341 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:38.599 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:11:38.600 00:11:38.600 real 0m22.362s 00:11:38.600 user 0m37.618s 00:11:38.600 sys 0m3.510s 00:11:38.600 ************************************ 00:11:38.600 END TEST nvmf_ns_masking 00:11:38.600 ************************************ 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.600 17:10:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.860 ************************************ 00:11:38.860 START TEST nvmf_auth_target 00:11:38.860 ************************************ 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:38.860 * Looking for test storage... 00:11:38.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.860 --rc genhtml_branch_coverage=1 00:11:38.860 --rc genhtml_function_coverage=1 00:11:38.860 --rc genhtml_legend=1 00:11:38.860 --rc geninfo_all_blocks=1 00:11:38.860 --rc geninfo_unexecuted_blocks=1 00:11:38.860 00:11:38.860 ' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.860 --rc genhtml_branch_coverage=1 00:11:38.860 --rc genhtml_function_coverage=1 00:11:38.860 --rc genhtml_legend=1 00:11:38.860 --rc geninfo_all_blocks=1 00:11:38.860 --rc geninfo_unexecuted_blocks=1 00:11:38.860 00:11:38.860 ' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.860 --rc genhtml_branch_coverage=1 00:11:38.860 --rc genhtml_function_coverage=1 00:11:38.860 --rc genhtml_legend=1 00:11:38.860 --rc geninfo_all_blocks=1 00:11:38.860 --rc geninfo_unexecuted_blocks=1 00:11:38.860 00:11:38.860 ' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.860 --rc genhtml_branch_coverage=1 00:11:38.860 --rc genhtml_function_coverage=1 00:11:38.860 --rc genhtml_legend=1 00:11:38.860 --rc geninfo_all_blocks=1 00:11:38.860 --rc geninfo_unexecuted_blocks=1 00:11:38.860 00:11:38.860 ' 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.860 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:38.861 
17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:38.861 Cannot find device "nvmf_init_br" 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:38.861 Cannot find device "nvmf_init_br2" 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:38.861 Cannot find device "nvmf_tgt_br" 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:38.861 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.120 Cannot find device "nvmf_tgt_br2" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:39.120 Cannot find device "nvmf_init_br" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:39.120 Cannot find device "nvmf_init_br2" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:39.120 Cannot find device "nvmf_tgt_br" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:39.120 Cannot find device "nvmf_tgt_br2" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:39.120 Cannot find device "nvmf_br" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:39.120 Cannot find device "nvmf_init_if" 00:11:39.120 17:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:39.120 Cannot find device "nvmf_init_if2" 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:39.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:39.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:39.120 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:39.121 17:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:39.121 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:39.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:39.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:11:39.380 00:11:39.380 --- 10.0.0.3 ping statistics --- 00:11:39.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.380 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:39.380 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:39.380 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:11:39.380 00:11:39.380 --- 10.0.0.4 ping statistics --- 00:11:39.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.380 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:39.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:39.380 00:11:39.380 --- 10.0.0.1 ping statistics --- 00:11:39.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.380 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:39.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:39.380 00:11:39.380 --- 10.0.0.2 ping statistics --- 00:11:39.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.380 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76609 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76609 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76609 ']' 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
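The ipts helper seen above is a thin iptables wrapper that tags each rule with an SPDK_NVMF comment (visible in its expansion at nvmf/common.sh@790), presumably so teardown can later find and delete exactly the rules the test added; the four pings then confirm reachability in both directions across the namespace boundary before anything NVMe-specific starts. The target itself is launched inside the namespace — note NVMF_APP being re-prefixed with the netns exec command — with -L nvmf_auth enabling DH-HMAC-CHAP debug logging, and nvmfappstart then waits for the RPC socket. A simplified stand-in for that launch-and-wait step (the real waitforlisten in autotest_common.sh is more careful, but the shape is this):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do    # wait for the RPC listener to appear
        kill -0 "$nvmfpid" || exit 1         # bail out if the target died during startup
        sleep 0.1
    done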
00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.380 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.758 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.758 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:40.758 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.758 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76653 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=557abaaa5526cd3dfecf38fa1c7b4dc997f388a198c22ff3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0Zs 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 557abaaa5526cd3dfecf38fa1c7b4dc997f388a198c22ff3 0 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 557abaaa5526cd3dfecf38fa1c7b4dc997f388a198c22ff3 0 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=557abaaa5526cd3dfecf38fa1c7b4dc997f388a198c22ff3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:40.759 17:10:32 
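gen_dhchap_key builds one secret per digest/length combination: xxd reads len/2 raw bytes from /dev/urandom and prints them as a len-character hex string, the result is written to a mktemp file with 0600 permissions, and the inline python step wraps it into the DHHC-1:<digest-id>:<base64 payload>: representation that nvme-cli and the kernel expect (digest ids 0-3 map to null/sha256/sha384/sha512, matching the digests table in the trace; per the DH-HMAC-CHAP secret format, the base64 payload is the key with a CRC-32 trailer appended). The shell half, sketched with the same names as the trace:

    len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 random bytes
    file=$(mktemp -t spdk.key-null.XXX)
    # format_dhchap_key "$key" 0 writes "DHHC-1:00:<base64(key + crc)>:" to $file
    # (done by the inline python snippet, elided here)
    chmod 0600 "$file"
    echo "$file"

Each keys[i] also gets a companion ckeys[i] — the controller-side secret used for bidirectional authentication — except ckeys[3], which is deliberately left empty so the key3 round exercises unidirectional auth.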
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0Zs 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0Zs 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.0Zs 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c05bd78ae2e1145f65ca2bb7cc4eea1e725d5536ba132557d191004d9769a9a3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XBl 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c05bd78ae2e1145f65ca2bb7cc4eea1e725d5536ba132557d191004d9769a9a3 3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c05bd78ae2e1145f65ca2bb7cc4eea1e725d5536ba132557d191004d9769a9a3 3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c05bd78ae2e1145f65ca2bb7cc4eea1e725d5536ba132557d191004d9769a9a3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:40.759 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XBl 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XBl 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.XBl 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:40.759 17:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=15507b7e4c21637aca647b7472436155 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Dcd 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 15507b7e4c21637aca647b7472436155 1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 15507b7e4c21637aca647b7472436155 1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=15507b7e4c21637aca647b7472436155 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Dcd 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Dcd 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Dcd 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=932d691c14e301474c886387a31238643672e27e2599490f 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uri 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 932d691c14e301474c886387a31238643672e27e2599490f 2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 932d691c14e301474c886387a31238643672e27e2599490f 2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=932d691c14e301474c886387a31238643672e27e2599490f 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uri 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uri 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.uri 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=db88ca33ccc2c9bdac0d8a803ecf873de7ae7c53244faed8 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9T2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key db88ca33ccc2c9bdac0d8a803ecf873de7ae7c53244faed8 2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 db88ca33ccc2c9bdac0d8a803ecf873de7ae7c53244faed8 2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=db88ca33ccc2c9bdac0d8a803ecf873de7ae7c53244faed8 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:40.759 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9T2 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9T2 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.9T2 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:40.760 17:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef35f8d0252de0ca806d515a2143b581 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7FI 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ef35f8d0252de0ca806d515a2143b581 1 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef35f8d0252de0ca806d515a2143b581 1 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef35f8d0252de0ca806d515a2143b581 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:40.760 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:41.019 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7FI 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7FI 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7FI 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4bb5d6b88a1aa7a46a7ebd54b473f327f11426f36c9bca442594d0fa495fe596 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.obS 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
4bb5d6b88a1aa7a46a7ebd54b473f327f11426f36c9bca442594d0fa495fe596 3 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4bb5d6b88a1aa7a46a7ebd54b473f327f11426f36c9bca442594d0fa495fe596 3 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4bb5d6b88a1aa7a46a7ebd54b473f327f11426f36c9bca442594d0fa495fe596 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.obS 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.obS 00:11:41.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.obS 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76609 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76609 ']' 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.020 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76653 /var/tmp/host.sock 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76653 ']' 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
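Two control sockets are now live: /var/tmp/spdk.sock belongs to the nvmf_tgt under test (pid 76609, running inside the namespace), and /var/tmp/host.sock to the spdk_tgt that plays the host role (pid 76653, started with -L nvme_auth for host-side DH-HMAC-CHAP debug output). rpc_cmd talks to the former by default; hostrpc is simply an rpc.py wrapper pinned to the latter, consistent with its expansion at target/auth.sh@31:

    hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }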
00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.278 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Zs 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.0Zs 00:11:41.537 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.0Zs 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.XBl ]] 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl 00:11:41.796 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Dcd 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Dcd 00:11:42.364 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Dcd 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.uri ]] 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uri 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uri 00:11:42.623 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uri 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9T2 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.881 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9T2 00:11:42.882 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9T2 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7FI ]] 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7FI 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7FI 00:11:43.140 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7FI 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.obS 00:11:43.399 17:10:35 
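Every key file is registered under the same keyring name on both sides — once via rpc_cmd on the target and once via hostrpc on the host — since DH-HMAC-CHAP requires both ends to hold the shared secret. For key0 and its controller key, the four calls seen above are:

    rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Zs       # target side
    hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.0Zs       # host side
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl
    hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl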
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.obS 00:11:43.399 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.obS 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.657 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.916 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.175 00:11:44.175 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.175 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.175 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.434 { 00:11:44.434 "auth": { 00:11:44.434 "dhgroup": "null", 00:11:44.434 "digest": "sha256", 00:11:44.434 "state": "completed" 00:11:44.434 }, 00:11:44.434 "cntlid": 1, 00:11:44.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:44.434 "listen_address": { 00:11:44.434 "adrfam": "IPv4", 00:11:44.434 "traddr": "10.0.0.3", 00:11:44.434 "trsvcid": "4420", 00:11:44.434 "trtype": "TCP" 00:11:44.434 }, 00:11:44.434 "peer_address": { 00:11:44.434 "adrfam": "IPv4", 00:11:44.434 "traddr": "10.0.0.1", 00:11:44.434 "trsvcid": "52520", 00:11:44.434 "trtype": "TCP" 00:11:44.434 }, 00:11:44.434 "qid": 0, 00:11:44.434 "state": "enabled", 00:11:44.434 "thread": "nvmf_tgt_poll_group_000" 00:11:44.434 } 00:11:44.434 ]' 00:11:44.434 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.693 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.693 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.693 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.693 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.693 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.693 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.693 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.952 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:11:44.952 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.254 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.254 17:10:42 
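The sequence that just completed for key0 is the template the rest of this section repeats for each digest/dhgroup/key combination: restrict the host's allowed DHCHAP parameters, grant the host NQN access to cnode0 with the key pair under test, attach through the SPDK host and check that the qpair JSON reports the expected digest, dhgroup, and state "completed", detach, redo the handshake from the kernel initiator with nvme-cli, then remove the host entry and move to the next key. One iteration, condensed ($hostnqn, $hostid, $key0_dhhc, and $ckey0_dhhc are placeholders for the literal NQN/UUID and DHHC-1 strings in the trace):

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'                    # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key0_dhhc" --dhchap-ctrl-secret "$ckey0_dhhc"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"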
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.254 00:11:50.254 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.255 { 00:11:50.255 "auth": { 00:11:50.255 "dhgroup": "null", 00:11:50.255 "digest": "sha256", 00:11:50.255 "state": "completed" 00:11:50.255 }, 00:11:50.255 "cntlid": 3, 00:11:50.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:50.255 "listen_address": { 00:11:50.255 "adrfam": "IPv4", 00:11:50.255 "traddr": "10.0.0.3", 00:11:50.255 "trsvcid": "4420", 00:11:50.255 "trtype": "TCP" 00:11:50.255 }, 00:11:50.255 "peer_address": { 00:11:50.255 "adrfam": "IPv4", 00:11:50.255 "traddr": "10.0.0.1", 00:11:50.255 "trsvcid": "49244", 00:11:50.255 "trtype": "TCP" 00:11:50.255 }, 00:11:50.255 "qid": 0, 00:11:50.255 "state": "enabled", 00:11:50.255 "thread": "nvmf_tgt_poll_group_000" 00:11:50.255 } 00:11:50.255 ]' 00:11:50.255 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.514 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.772 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret 
DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:11:50.772 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:51.709 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.968 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.969 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.969 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.969 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.969 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.228 00:11:52.228 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.228 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.228 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.487 { 00:11:52.487 "auth": { 00:11:52.487 "dhgroup": "null", 00:11:52.487 "digest": "sha256", 00:11:52.487 "state": "completed" 00:11:52.487 }, 00:11:52.487 "cntlid": 5, 00:11:52.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:52.487 "listen_address": { 00:11:52.487 "adrfam": "IPv4", 00:11:52.487 "traddr": "10.0.0.3", 00:11:52.487 "trsvcid": "4420", 00:11:52.487 "trtype": "TCP" 00:11:52.487 }, 00:11:52.487 "peer_address": { 00:11:52.487 "adrfam": "IPv4", 00:11:52.487 "traddr": "10.0.0.1", 00:11:52.487 "trsvcid": "49268", 00:11:52.487 "trtype": "TCP" 00:11:52.487 }, 00:11:52.487 "qid": 0, 00:11:52.487 "state": "enabled", 00:11:52.487 "thread": "nvmf_tgt_poll_group_000" 00:11:52.487 } 00:11:52.487 ]' 00:11:52.487 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.745 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.746 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.005 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:11:53.005 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:11:53.573 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.573 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:53.573 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.573 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.851 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.851 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.851 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:53.851 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.110 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.367 00:11:54.367 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.367 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.367 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.989 { 00:11:54.989 "auth": { 00:11:54.989 "dhgroup": "null", 00:11:54.989 "digest": "sha256", 00:11:54.989 "state": "completed" 00:11:54.989 }, 00:11:54.989 "cntlid": 7, 00:11:54.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:54.989 "listen_address": { 00:11:54.989 "adrfam": "IPv4", 00:11:54.989 "traddr": "10.0.0.3", 00:11:54.989 "trsvcid": "4420", 00:11:54.989 "trtype": "TCP" 00:11:54.989 }, 00:11:54.989 "peer_address": { 00:11:54.989 "adrfam": "IPv4", 00:11:54.989 "traddr": "10.0.0.1", 00:11:54.989 "trsvcid": "49314", 00:11:54.989 "trtype": "TCP" 00:11:54.989 }, 00:11:54.989 "qid": 0, 00:11:54.989 "state": "enabled", 00:11:54.989 "thread": "nvmf_tgt_poll_group_000" 00:11:54.989 } 00:11:54.989 ]' 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.989 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.248 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:11:55.248 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:55.815 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.382 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.641 00:11:56.641 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.641 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.641 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.900 { 00:11:56.900 "auth": { 00:11:56.900 "dhgroup": "ffdhe2048", 00:11:56.900 "digest": "sha256", 00:11:56.900 "state": "completed" 00:11:56.900 }, 00:11:56.900 "cntlid": 9, 00:11:56.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:56.900 "listen_address": { 00:11:56.900 "adrfam": "IPv4", 00:11:56.900 "traddr": "10.0.0.3", 00:11:56.900 "trsvcid": "4420", 00:11:56.900 "trtype": "TCP" 00:11:56.900 }, 00:11:56.900 "peer_address": { 00:11:56.900 "adrfam": "IPv4", 00:11:56.900 "traddr": "10.0.0.1", 00:11:56.900 "trsvcid": "49338", 00:11:56.900 "trtype": "TCP" 00:11:56.900 }, 00:11:56.900 "qid": 0, 00:11:56.900 "state": "enabled", 00:11:56.900 "thread": "nvmf_tgt_poll_group_000" 00:11:56.900 } 00:11:56.900 ]' 00:11:56.900 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.158 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.419 
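For reference, the nvme_connect helper traced below is a thin wrapper over plain nvme-cli. A minimal standalone equivalent, with addresses and NQNs as in this run and assuming an nvme-cli build new enough to support in-band DH-HMAC-CHAP, would be:

    # the host secret proves the host to the controller; supplying a ctrl
    # secret as well makes the exchange bidirectional
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 \
        --dhchap-secret 'DHHC-1:00:<base64 key material>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 key material>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The <base64 key material> placeholders stand in for the opaque strings printed in the trace. Reading the two-digit field after DHHC-1 as the key representation (00 for a cleartext key, 01/02/03 for SHA-256/384/512-sized transformed keys, with a checksum folded into the base64 payload) is an assumption; the log itself only shows the encoded secrets.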
17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:11:57.419 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:57.988 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.557 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.816 00:11:58.816 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.816 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.816 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.076 { 00:11:59.076 "auth": { 00:11:59.076 "dhgroup": "ffdhe2048", 00:11:59.076 "digest": "sha256", 00:11:59.076 "state": "completed" 00:11:59.076 }, 00:11:59.076 "cntlid": 11, 00:11:59.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:11:59.076 "listen_address": { 00:11:59.076 "adrfam": "IPv4", 00:11:59.076 "traddr": "10.0.0.3", 00:11:59.076 "trsvcid": "4420", 00:11:59.076 "trtype": "TCP" 00:11:59.076 }, 00:11:59.076 "peer_address": { 00:11:59.076 "adrfam": "IPv4", 00:11:59.076 "traddr": "10.0.0.1", 00:11:59.076 "trsvcid": "55508", 00:11:59.076 "trtype": "TCP" 00:11:59.076 }, 00:11:59.076 "qid": 0, 00:11:59.076 "state": "enabled", 00:11:59.076 "thread": "nvmf_tgt_poll_group_000" 00:11:59.076 } 00:11:59.076 ]' 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.076 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.076 
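The sequence just above is the verification step that recurs after every attach in this section: dump the subsystem's queue pairs on the target and assert that in-band authentication really completed with the parameters under test. Pulled out of the trace, the check amounts to:

    # target-side RPC: list the active qpairs for the subsystem, then
    # inspect the negotiated auth parameters of the first one
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]

A failed or still-pending handshake would surface here as an auth state other than "completed", so a passing iteration means the digest/dhgroup pair was actually negotiated, not merely requested.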
17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.643 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:11:59.643 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.210 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.469 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.036 00:12:01.036 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.036 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.036 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.295 { 00:12:01.295 "auth": { 00:12:01.295 "dhgroup": "ffdhe2048", 00:12:01.295 "digest": "sha256", 00:12:01.295 "state": "completed" 00:12:01.295 }, 00:12:01.295 "cntlid": 13, 00:12:01.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:01.295 "listen_address": { 00:12:01.295 "adrfam": "IPv4", 00:12:01.295 "traddr": "10.0.0.3", 00:12:01.295 "trsvcid": "4420", 00:12:01.295 "trtype": "TCP" 00:12:01.295 }, 00:12:01.295 "peer_address": { 00:12:01.295 "adrfam": "IPv4", 00:12:01.295 "traddr": "10.0.0.1", 00:12:01.295 "trsvcid": "55532", 00:12:01.295 "trtype": "TCP" 00:12:01.295 }, 00:12:01.295 "qid": 0, 00:12:01.295 "state": "enabled", 00:12:01.295 "thread": "nvmf_tgt_poll_group_000" 00:12:01.295 } 00:12:01.295 ]' 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.295 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.553 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.553 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.553 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.553 17:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.553 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.812 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:01.812 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:02.379 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
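Note the add_host call closing the iteration above: key3 is granted with --dhchap-key only and no --dhchap-ctrlr-key, which matches the ${ckeys[$3]:+...} expansion coming up empty for index 3. In outline (hostnqn abbreviated; the key names refer to keys registered with the target earlier in the run, outside this excerpt):

    # target side: allow the host NQN on the subsystem and require DH-HMAC-CHAP
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key3
    # with only --dhchap-key the authentication is unidirectional: the host
    # must prove knowledge of key3 but never challenges the controller back;
    # adding --dhchap-ctrlr-key ckey3 would make the exchange bidirectional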
00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:02.946 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.947 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:03.206 00:12:03.206 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.206 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.206 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.465 { 00:12:03.465 "auth": { 00:12:03.465 "dhgroup": "ffdhe2048", 00:12:03.465 "digest": "sha256", 00:12:03.465 "state": "completed" 00:12:03.465 }, 00:12:03.465 "cntlid": 15, 00:12:03.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:03.465 "listen_address": { 00:12:03.465 "adrfam": "IPv4", 00:12:03.465 "traddr": "10.0.0.3", 00:12:03.465 "trsvcid": "4420", 00:12:03.465 "trtype": "TCP" 00:12:03.465 }, 00:12:03.465 "peer_address": { 00:12:03.465 "adrfam": "IPv4", 00:12:03.465 "traddr": "10.0.0.1", 00:12:03.465 "trsvcid": "55552", 00:12:03.465 "trtype": "TCP" 00:12:03.465 }, 00:12:03.465 "qid": 0, 00:12:03.465 "state": "enabled", 00:12:03.465 "thread": "nvmf_tgt_poll_group_000" 00:12:03.465 } 00:12:03.465 ]' 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.465 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.724 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.724 
17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.724 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.983 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:03.983 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:04.550 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.120 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.378 00:12:05.378 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.378 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.378 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.637 { 00:12:05.637 "auth": { 00:12:05.637 "dhgroup": "ffdhe3072", 00:12:05.637 "digest": "sha256", 00:12:05.637 "state": "completed" 00:12:05.637 }, 00:12:05.637 "cntlid": 17, 00:12:05.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:05.637 "listen_address": { 00:12:05.637 "adrfam": "IPv4", 00:12:05.637 "traddr": "10.0.0.3", 00:12:05.637 "trsvcid": "4420", 00:12:05.637 "trtype": "TCP" 00:12:05.637 }, 00:12:05.637 "peer_address": { 00:12:05.637 "adrfam": "IPv4", 00:12:05.637 "traddr": "10.0.0.1", 00:12:05.637 "trsvcid": "55570", 00:12:05.637 "trtype": "TCP" 00:12:05.637 }, 00:12:05.637 "qid": 0, 00:12:05.637 "state": "enabled", 00:12:05.637 "thread": "nvmf_tgt_poll_group_000" 00:12:05.637 } 00:12:05.637 ]' 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.637 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.896 17:10:58 
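Two RPC endpoints are in play throughout: every command tagged target/auth.sh@31 goes through scripts/rpc.py -s /var/tmp/host.sock, i.e. a second SPDK application acting as the NVMe host, while the bare rpc_cmd calls talk to the target (presumably over the default RPC socket, which this trace never prints). The host-side half of an iteration like the one above then reduces to the following sketch (hostnqn again abbreviated):

    # host side: pin negotiation to a single digest/DH-group pair, then
    # attach; bdev_nvme_attach_controller fails unless DH-HMAC-CHAP completes
    hostrpc="scripts/rpc.py -s /var/tmp/host.sock"
    $hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0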
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.896 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.896 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.153 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:06.153 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:06.719 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.977 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.543 00:12:07.543 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.543 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.543 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.802 { 00:12:07.802 "auth": { 00:12:07.802 "dhgroup": "ffdhe3072", 00:12:07.802 "digest": "sha256", 00:12:07.802 "state": "completed" 00:12:07.802 }, 00:12:07.802 "cntlid": 19, 00:12:07.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:07.802 "listen_address": { 00:12:07.802 "adrfam": "IPv4", 00:12:07.802 "traddr": "10.0.0.3", 00:12:07.802 "trsvcid": "4420", 00:12:07.802 "trtype": "TCP" 00:12:07.802 }, 00:12:07.802 "peer_address": { 00:12:07.802 "adrfam": "IPv4", 00:12:07.802 "traddr": "10.0.0.1", 00:12:07.802 "trsvcid": "55600", 00:12:07.802 "trtype": "TCP" 00:12:07.802 }, 00:12:07.802 "qid": 0, 00:12:07.802 "state": "enabled", 00:12:07.802 "thread": "nvmf_tgt_poll_group_000" 00:12:07.802 } 00:12:07.802 ]' 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.802 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.803 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.369 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:08.369 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:08.937 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.196 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.455 00:12:09.455 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.455 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.455 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.023 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.023 { 00:12:10.023 "auth": { 00:12:10.023 "dhgroup": "ffdhe3072", 00:12:10.023 "digest": "sha256", 00:12:10.023 "state": "completed" 00:12:10.023 }, 00:12:10.023 "cntlid": 21, 00:12:10.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:10.023 "listen_address": { 00:12:10.023 "adrfam": "IPv4", 00:12:10.024 "traddr": "10.0.0.3", 00:12:10.024 "trsvcid": "4420", 00:12:10.024 "trtype": "TCP" 00:12:10.024 }, 00:12:10.024 "peer_address": { 00:12:10.024 "adrfam": "IPv4", 00:12:10.024 "traddr": "10.0.0.1", 00:12:10.024 "trsvcid": "35908", 00:12:10.024 "trtype": "TCP" 00:12:10.024 }, 00:12:10.024 "qid": 0, 00:12:10.024 "state": "enabled", 00:12:10.024 "thread": "nvmf_tgt_poll_group_000" 00:12:10.024 } 00:12:10.024 ]' 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.024 17:11:02 
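The target/auth.sh@119 and @120 frames that keep reappearing are the two nested loops driving this whole section: an outer sweep over DH groups and an inner sweep over the key indices, with key0 through key2 paired with controller keys and key3 left unidirectional. Reconstructed from the fragments visible here (only the sha256 leg and only the null through ffdhe4096 groups appear in this excerpt; the full script may sweep further digests and groups):

    # shape of the sweep, inferred from the @119-@123 trace lines above
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                                          --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

Each connect_authenticate round is the add_host, attach, verify, detach, nvme-connect cycle spelled out in full above.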
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.024 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.282 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:10.282 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:11.219 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.479 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.738 00:12:11.738 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.738 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.738 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.997 { 00:12:11.997 "auth": { 00:12:11.997 "dhgroup": "ffdhe3072", 00:12:11.997 "digest": "sha256", 00:12:11.997 "state": "completed" 00:12:11.997 }, 00:12:11.997 "cntlid": 23, 00:12:11.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:11.997 "listen_address": { 00:12:11.997 "adrfam": "IPv4", 00:12:11.997 "traddr": "10.0.0.3", 00:12:11.997 "trsvcid": "4420", 00:12:11.997 "trtype": "TCP" 00:12:11.997 }, 00:12:11.997 "peer_address": { 00:12:11.997 "adrfam": "IPv4", 00:12:11.997 "traddr": "10.0.0.1", 00:12:11.997 "trsvcid": "35932", 00:12:11.997 "trtype": "TCP" 00:12:11.997 }, 00:12:11.997 "qid": 0, 00:12:11.997 "state": "enabled", 00:12:11.997 "thread": "nvmf_tgt_poll_group_000" 00:12:11.997 } 00:12:11.997 ]' 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:11.997 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.256 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.256 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.256 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.256 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.256 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.515 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:12.515 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:13.081 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.082 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.649 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.907 00:12:13.907 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.907 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.907 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.166 { 00:12:14.166 "auth": { 00:12:14.166 "dhgroup": "ffdhe4096", 00:12:14.166 "digest": "sha256", 00:12:14.166 "state": "completed" 00:12:14.166 }, 00:12:14.166 "cntlid": 25, 00:12:14.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:14.166 "listen_address": { 00:12:14.166 "adrfam": "IPv4", 00:12:14.166 "traddr": "10.0.0.3", 00:12:14.166 "trsvcid": "4420", 00:12:14.166 "trtype": "TCP" 00:12:14.166 }, 00:12:14.166 "peer_address": { 00:12:14.166 "adrfam": "IPv4", 00:12:14.166 "traddr": "10.0.0.1", 00:12:14.166 "trsvcid": "35962", 00:12:14.166 "trtype": "TCP" 00:12:14.166 }, 00:12:14.166 "qid": 0, 00:12:14.166 "state": "enabled", 00:12:14.166 "thread": "nvmf_tgt_poll_group_000" 00:12:14.166 } 00:12:14.166 ]' 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.166 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.424 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.424 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.424 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.683 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:14.683 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.249 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.816 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.075 00:12:16.075 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.075 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.075 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.333 { 00:12:16.333 "auth": { 00:12:16.333 "dhgroup": "ffdhe4096", 00:12:16.333 "digest": "sha256", 00:12:16.333 "state": "completed" 00:12:16.333 }, 00:12:16.333 "cntlid": 27, 00:12:16.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:16.333 "listen_address": { 00:12:16.333 "adrfam": "IPv4", 00:12:16.333 "traddr": "10.0.0.3", 00:12:16.333 "trsvcid": "4420", 00:12:16.333 "trtype": "TCP" 00:12:16.333 }, 00:12:16.333 "peer_address": { 00:12:16.333 "adrfam": "IPv4", 00:12:16.333 "traddr": "10.0.0.1", 00:12:16.333 "trsvcid": "36000", 00:12:16.333 "trtype": "TCP" 00:12:16.333 }, 00:12:16.333 "qid": 0, 
00:12:16.333 "state": "enabled", 00:12:16.333 "thread": "nvmf_tgt_poll_group_000" 00:12:16.333 } 00:12:16.333 ]' 00:12:16.333 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.591 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.591 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.591 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.591 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.591 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.591 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.591 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.847 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:16.847 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.780 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.781 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.781 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.781 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.781 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.346 00:12:18.346 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.346 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.346 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.604 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.605 { 00:12:18.605 "auth": { 00:12:18.605 "dhgroup": "ffdhe4096", 00:12:18.605 "digest": "sha256", 00:12:18.605 "state": "completed" 00:12:18.605 }, 00:12:18.605 "cntlid": 29, 00:12:18.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:18.605 "listen_address": { 00:12:18.605 "adrfam": "IPv4", 00:12:18.605 "traddr": "10.0.0.3", 00:12:18.605 "trsvcid": "4420", 00:12:18.605 "trtype": "TCP" 00:12:18.605 }, 00:12:18.605 "peer_address": { 00:12:18.605 "adrfam": "IPv4", 00:12:18.605 "traddr": "10.0.0.1", 
00:12:18.605 "trsvcid": "36028", 00:12:18.605 "trtype": "TCP" 00:12:18.605 }, 00:12:18.605 "qid": 0, 00:12:18.605 "state": "enabled", 00:12:18.605 "thread": "nvmf_tgt_poll_group_000" 00:12:18.605 } 00:12:18.605 ]' 00:12:18.605 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.605 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.862 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:18.862 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:19.798 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.798 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:19.798 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.798 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.798 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.367 00:12:20.367 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.367 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.367 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.625 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.625 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.625 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.625 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.625 { 00:12:20.625 "auth": { 00:12:20.625 "dhgroup": "ffdhe4096", 00:12:20.625 "digest": "sha256", 00:12:20.625 "state": "completed" 00:12:20.625 }, 00:12:20.625 "cntlid": 31, 00:12:20.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:20.625 "listen_address": { 00:12:20.625 "adrfam": "IPv4", 00:12:20.625 "traddr": "10.0.0.3", 00:12:20.625 "trsvcid": "4420", 00:12:20.625 "trtype": "TCP" 00:12:20.625 }, 00:12:20.625 "peer_address": { 00:12:20.625 "adrfam": "IPv4", 00:12:20.625 "traddr": 
"10.0.0.1", 00:12:20.625 "trsvcid": "43984", 00:12:20.625 "trtype": "TCP" 00:12:20.625 }, 00:12:20.625 "qid": 0, 00:12:20.625 "state": "enabled", 00:12:20.625 "thread": "nvmf_tgt_poll_group_000" 00:12:20.625 } 00:12:20.625 ]' 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.625 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.883 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.883 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.883 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.141 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:21.141 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.708 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.978 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:21.978 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.978 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.978 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:21.978 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.979 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.557 00:12:22.557 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.557 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.557 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.816 { 00:12:22.816 "auth": { 00:12:22.816 "dhgroup": "ffdhe6144", 00:12:22.816 "digest": "sha256", 00:12:22.816 "state": "completed" 00:12:22.816 }, 00:12:22.816 "cntlid": 33, 00:12:22.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:22.816 "listen_address": { 00:12:22.816 "adrfam": "IPv4", 00:12:22.816 "traddr": "10.0.0.3", 00:12:22.816 "trsvcid": "4420", 00:12:22.816 
"trtype": "TCP" 00:12:22.816 }, 00:12:22.816 "peer_address": { 00:12:22.816 "adrfam": "IPv4", 00:12:22.816 "traddr": "10.0.0.1", 00:12:22.816 "trsvcid": "44002", 00:12:22.816 "trtype": "TCP" 00:12:22.816 }, 00:12:22.816 "qid": 0, 00:12:22.816 "state": "enabled", 00:12:22.816 "thread": "nvmf_tgt_poll_group_000" 00:12:22.816 } 00:12:22.816 ]' 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.816 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.084 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.084 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.084 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.084 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.084 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.356 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:23.356 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:23.922 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:24.179 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.180 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.180 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.180 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.438 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.438 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.438 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.438 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.696 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.954 { 00:12:24.954 "auth": { 00:12:24.954 "dhgroup": "ffdhe6144", 00:12:24.954 "digest": "sha256", 00:12:24.954 "state": "completed" 00:12:24.954 }, 00:12:24.954 "cntlid": 35, 00:12:24.954 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:24.954 "listen_address": { 00:12:24.954 "adrfam": "IPv4", 00:12:24.954 "traddr": "10.0.0.3", 00:12:24.954 "trsvcid": "4420", 00:12:24.954 "trtype": "TCP" 00:12:24.954 }, 00:12:24.954 "peer_address": { 00:12:24.954 "adrfam": "IPv4", 00:12:24.954 "traddr": "10.0.0.1", 00:12:24.954 "trsvcid": "44026", 00:12:24.954 "trtype": "TCP" 00:12:24.954 }, 00:12:24.954 "qid": 0, 00:12:24.954 "state": "enabled", 00:12:24.954 "thread": "nvmf_tgt_poll_group_000" 00:12:24.954 } 00:12:24.954 ]' 00:12:24.954 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.211 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.211 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.212 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.212 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.212 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.212 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.212 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.469 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:25.469 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:26.402 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.660 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.223 00:12:27.223 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.223 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.223 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.481 { 00:12:27.481 "auth": { 00:12:27.481 "dhgroup": "ffdhe6144", 
00:12:27.481 "digest": "sha256", 00:12:27.481 "state": "completed" 00:12:27.481 }, 00:12:27.481 "cntlid": 37, 00:12:27.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:27.481 "listen_address": { 00:12:27.481 "adrfam": "IPv4", 00:12:27.481 "traddr": "10.0.0.3", 00:12:27.481 "trsvcid": "4420", 00:12:27.481 "trtype": "TCP" 00:12:27.481 }, 00:12:27.481 "peer_address": { 00:12:27.481 "adrfam": "IPv4", 00:12:27.481 "traddr": "10.0.0.1", 00:12:27.481 "trsvcid": "44056", 00:12:27.481 "trtype": "TCP" 00:12:27.481 }, 00:12:27.481 "qid": 0, 00:12:27.481 "state": "enabled", 00:12:27.481 "thread": "nvmf_tgt_poll_group_000" 00:12:27.481 } 00:12:27.481 ]' 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.481 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.996 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:27.996 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:12:28.561 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.818 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.384 00:12:29.384 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.384 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.384 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.642 { 00:12:29.642 "auth": { 00:12:29.642 "dhgroup": 
"ffdhe6144", 00:12:29.642 "digest": "sha256", 00:12:29.642 "state": "completed" 00:12:29.642 }, 00:12:29.642 "cntlid": 39, 00:12:29.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:29.642 "listen_address": { 00:12:29.642 "adrfam": "IPv4", 00:12:29.642 "traddr": "10.0.0.3", 00:12:29.642 "trsvcid": "4420", 00:12:29.642 "trtype": "TCP" 00:12:29.642 }, 00:12:29.642 "peer_address": { 00:12:29.642 "adrfam": "IPv4", 00:12:29.642 "traddr": "10.0.0.1", 00:12:29.642 "trsvcid": "33274", 00:12:29.642 "trtype": "TCP" 00:12:29.642 }, 00:12:29.642 "qid": 0, 00:12:29.642 "state": "enabled", 00:12:29.642 "thread": "nvmf_tgt_poll_group_000" 00:12:29.642 } 00:12:29.642 ]' 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.642 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.900 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.900 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.900 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.159 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:30.159 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:30.726 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.984 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.551 00:12:31.551 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.551 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.551 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.118 17:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.118 { 00:12:32.118 "auth": { 00:12:32.118 "dhgroup": "ffdhe8192", 00:12:32.118 "digest": "sha256", 00:12:32.118 "state": "completed" 00:12:32.118 }, 00:12:32.118 "cntlid": 41, 00:12:32.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:32.118 "listen_address": { 00:12:32.118 "adrfam": "IPv4", 00:12:32.118 "traddr": "10.0.0.3", 00:12:32.118 "trsvcid": "4420", 00:12:32.118 "trtype": "TCP" 00:12:32.118 }, 00:12:32.118 "peer_address": { 00:12:32.118 "adrfam": "IPv4", 00:12:32.118 "traddr": "10.0.0.1", 00:12:32.118 "trsvcid": "33308", 00:12:32.118 "trtype": "TCP" 00:12:32.118 }, 00:12:32.118 "qid": 0, 00:12:32.118 "state": "enabled", 00:12:32.118 "thread": "nvmf_tgt_poll_group_000" 00:12:32.118 } 00:12:32.118 ]' 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.118 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.377 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:32.377 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.313 17:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.313 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.882 00:12:33.882 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.882 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.882 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.450 17:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.450 { 00:12:34.450 "auth": { 00:12:34.450 "dhgroup": "ffdhe8192", 00:12:34.450 "digest": "sha256", 00:12:34.450 "state": "completed" 00:12:34.450 }, 00:12:34.450 "cntlid": 43, 00:12:34.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:34.450 "listen_address": { 00:12:34.450 "adrfam": "IPv4", 00:12:34.450 "traddr": "10.0.0.3", 00:12:34.450 "trsvcid": "4420", 00:12:34.450 "trtype": "TCP" 00:12:34.450 }, 00:12:34.450 "peer_address": { 00:12:34.450 "adrfam": "IPv4", 00:12:34.450 "traddr": "10.0.0.1", 00:12:34.450 "trsvcid": "33342", 00:12:34.450 "trtype": "TCP" 00:12:34.450 }, 00:12:34.450 "qid": 0, 00:12:34.450 "state": "enabled", 00:12:34.450 "thread": "nvmf_tgt_poll_group_000" 00:12:34.450 } 00:12:34.450 ]' 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.450 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.709 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:34.709 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
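The records above complete the sha256/ffdhe8192 passes for key0 and key1; every pass in this file drives the same three RPCs before anything is asserted. A minimal sketch of one pass, distilled from the trace (socket path, address, NQNs and key names are the ones this run uses; target-side calls go to rpc.py's default SPDK socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
  # host side: pin DH-HMAC-CHAP negotiation to one digest/dhgroup pair
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: admit the host NQN with a bidirectional key pair
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller, authenticating with the same keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1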
00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:35.277 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.535 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.471 00:12:36.471 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.471 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.471 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.471 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.729 17:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.729 { 00:12:36.729 "auth": { 00:12:36.729 "dhgroup": "ffdhe8192", 00:12:36.729 "digest": "sha256", 00:12:36.729 "state": "completed" 00:12:36.729 }, 00:12:36.729 "cntlid": 45, 00:12:36.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:36.729 "listen_address": { 00:12:36.729 "adrfam": "IPv4", 00:12:36.729 "traddr": "10.0.0.3", 00:12:36.729 "trsvcid": "4420", 00:12:36.729 "trtype": "TCP" 00:12:36.729 }, 00:12:36.729 "peer_address": { 00:12:36.729 "adrfam": "IPv4", 00:12:36.729 "traddr": "10.0.0.1", 00:12:36.729 "trsvcid": "33368", 00:12:36.729 "trtype": "TCP" 00:12:36.729 }, 00:12:36.729 "qid": 0, 00:12:36.729 "state": "enabled", 00:12:36.729 "thread": "nvmf_tgt_poll_group_000" 00:12:36.729 } 00:12:36.729 ]' 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.729 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.987 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:36.987 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
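Each attach is then verified from the target's point of view: the qpairs JSON above is fetched once and probed three times, and the pass only proceeds if the digest, DH group and final auth state all match what was configured. Reduced to its essentials (same $rpc as in the earlier sketch):

  q=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<<"$q"   # must print sha256
  jq -r '.[0].auth.dhgroup' <<<"$q"   # must print ffdhe8192
  jq -r '.[0].auth.state'   <<<"$q"   # must print completed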
00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:37.923 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.182 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.748 00:12:38.748 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.748 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.748 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.315 
17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.315 { 00:12:39.315 "auth": { 00:12:39.315 "dhgroup": "ffdhe8192", 00:12:39.315 "digest": "sha256", 00:12:39.315 "state": "completed" 00:12:39.315 }, 00:12:39.315 "cntlid": 47, 00:12:39.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:39.315 "listen_address": { 00:12:39.315 "adrfam": "IPv4", 00:12:39.315 "traddr": "10.0.0.3", 00:12:39.315 "trsvcid": "4420", 00:12:39.315 "trtype": "TCP" 00:12:39.315 }, 00:12:39.315 "peer_address": { 00:12:39.315 "adrfam": "IPv4", 00:12:39.315 "traddr": "10.0.0.1", 00:12:39.315 "trsvcid": "39364", 00:12:39.315 "trtype": "TCP" 00:12:39.315 }, 00:12:39.315 "qid": 0, 00:12:39.315 "state": "enabled", 00:12:39.315 "thread": "nvmf_tgt_poll_group_000" 00:12:39.315 } 00:12:39.315 ]' 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.315 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.576 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:39.576 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
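The key3 pass that just finished (cntlid 47) is the odd one out: its nvmf_subsystem_add_host call carries --dhchap-key key3 but no --dhchap-ctrlr-key, because the script's ckey expansion drops the option pair when no controller key is defined for that keyid — key3 therefore exercises unidirectional authentication. A sketch of the expansion as the trace shows it ($keyid stands in for the loop variable):

  # ${ckeys[$keyid]:+...} expands to nothing when ckeys[$keyid] is unset
  # or empty, so the controller-key options vanish for key3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"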
00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.509 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.767 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.767 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.767 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.767 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.025 00:12:41.026 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.026 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.026 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.284 { 00:12:41.284 "auth": { 00:12:41.284 "dhgroup": "null", 00:12:41.284 "digest": "sha384", 00:12:41.284 "state": "completed" 00:12:41.284 }, 00:12:41.284 "cntlid": 49, 00:12:41.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:41.284 "listen_address": { 00:12:41.284 "adrfam": "IPv4", 00:12:41.284 "traddr": "10.0.0.3", 00:12:41.284 "trsvcid": "4420", 00:12:41.284 "trtype": "TCP" 00:12:41.284 }, 00:12:41.284 "peer_address": { 00:12:41.284 "adrfam": "IPv4", 00:12:41.284 "traddr": "10.0.0.1", 00:12:41.284 "trsvcid": "39398", 00:12:41.284 "trtype": "TCP" 00:12:41.284 }, 00:12:41.284 "qid": 0, 00:12:41.284 "state": "enabled", 00:12:41.284 "thread": "nvmf_tgt_poll_group_000" 00:12:41.284 } 00:12:41.284 ]' 00:12:41.284 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.542 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.800 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:41.800 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:42.734 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.734 17:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:42.734 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.992 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.251 00:12:43.251 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.251 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
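From here the outer loop has moved on to sha384 with the null DH group. Under DH-HMAC-CHAP the null group means no FFDHE exchange is performed at all, so these passes cover the challenge-response with the digest and keys alone; the only knob that changes is the options call (verbatim from the trace):

  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null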
00:12:43.251 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.509 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.509 { 00:12:43.509 "auth": { 00:12:43.509 "dhgroup": "null", 00:12:43.509 "digest": "sha384", 00:12:43.509 "state": "completed" 00:12:43.509 }, 00:12:43.509 "cntlid": 51, 00:12:43.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:43.509 "listen_address": { 00:12:43.509 "adrfam": "IPv4", 00:12:43.509 "traddr": "10.0.0.3", 00:12:43.509 "trsvcid": "4420", 00:12:43.509 "trtype": "TCP" 00:12:43.509 }, 00:12:43.509 "peer_address": { 00:12:43.509 "adrfam": "IPv4", 00:12:43.509 "traddr": "10.0.0.1", 00:12:43.509 "trsvcid": "39426", 00:12:43.509 "trtype": "TCP" 00:12:43.509 }, 00:12:43.509 "qid": 0, 00:12:43.509 "state": "enabled", 00:12:43.509 "thread": "nvmf_tgt_poll_group_000" 00:12:43.509 } 00:12:43.510 ]' 00:12:43.510 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.510 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.510 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.510 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:43.510 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.768 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.768 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.768 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.026 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:44.026 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.959 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.959 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.960 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.960 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.527 00:12:45.527 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.527 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.527 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.786 { 00:12:45.786 "auth": { 00:12:45.786 "dhgroup": "null", 00:12:45.786 "digest": "sha384", 00:12:45.786 "state": "completed" 00:12:45.786 }, 00:12:45.786 "cntlid": 53, 00:12:45.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:45.786 "listen_address": { 00:12:45.786 "adrfam": "IPv4", 00:12:45.786 "traddr": "10.0.0.3", 00:12:45.786 "trsvcid": "4420", 00:12:45.786 "trtype": "TCP" 00:12:45.786 }, 00:12:45.786 "peer_address": { 00:12:45.786 "adrfam": "IPv4", 00:12:45.786 "traddr": "10.0.0.1", 00:12:45.786 "trsvcid": "39438", 00:12:45.786 "trtype": "TCP" 00:12:45.786 }, 00:12:45.786 "qid": 0, 00:12:45.786 "state": "enabled", 00:12:45.786 "thread": "nvmf_tgt_poll_group_000" 00:12:45.786 } 00:12:45.786 ]' 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.786 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.044 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:46.044 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:46.610 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.869 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.127 00:12:47.128 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.128 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.128 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.386 { 00:12:47.386 "auth": { 00:12:47.386 "dhgroup": "null", 00:12:47.386 "digest": "sha384", 00:12:47.386 "state": "completed" 00:12:47.386 }, 00:12:47.386 "cntlid": 55, 00:12:47.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:47.386 "listen_address": { 00:12:47.386 "adrfam": "IPv4", 00:12:47.386 "traddr": "10.0.0.3", 00:12:47.386 "trsvcid": "4420", 00:12:47.386 "trtype": "TCP" 00:12:47.386 }, 00:12:47.386 "peer_address": { 00:12:47.386 "adrfam": "IPv4", 00:12:47.386 "traddr": "10.0.0.1", 00:12:47.386 "trsvcid": "39458", 00:12:47.386 "trtype": "TCP" 00:12:47.386 }, 00:12:47.386 "qid": 0, 00:12:47.386 "state": "enabled", 00:12:47.386 "thread": "nvmf_tgt_poll_group_000" 00:12:47.386 } 00:12:47.386 ]' 00:12:47.386 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.644 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.644 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.644 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:47.644 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.644 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.644 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.644 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.902 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:47.902 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.469 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.728 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.987 00:12:48.987 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
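Alongside the bdev attach, every pass also round-trips the kernel initiator: nvme-cli connects with the in-band DHHC-1 secrets and is disconnected again once the controller registers. Distilled from the trace, with $KEY and $CKEY standing in for the literal DHHC-1:xx:...: strings printed above (the key3 passes omit --dhchap-ctrl-secret, matching the unidirectional add_host):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 \
      --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0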
00:12:48.987 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.987 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.246 { 00:12:49.246 "auth": { 00:12:49.246 "dhgroup": "ffdhe2048", 00:12:49.246 "digest": "sha384", 00:12:49.246 "state": "completed" 00:12:49.246 }, 00:12:49.246 "cntlid": 57, 00:12:49.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:49.246 "listen_address": { 00:12:49.246 "adrfam": "IPv4", 00:12:49.246 "traddr": "10.0.0.3", 00:12:49.246 "trsvcid": "4420", 00:12:49.246 "trtype": "TCP" 00:12:49.246 }, 00:12:49.246 "peer_address": { 00:12:49.246 "adrfam": "IPv4", 00:12:49.246 "traddr": "10.0.0.1", 00:12:49.246 "trsvcid": "43016", 00:12:49.246 "trtype": "TCP" 00:12:49.246 }, 00:12:49.246 "qid": 0, 00:12:49.246 "state": "enabled", 00:12:49.246 "thread": "nvmf_tgt_poll_group_000" 00:12:49.246 } 00:12:49.246 ]' 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.246 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.505 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.505 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.505 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.505 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:49.505 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: 
--dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.072 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.640 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.898 00:12:50.898 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.898 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.898 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.157 { 00:12:51.157 "auth": { 00:12:51.157 "dhgroup": "ffdhe2048", 00:12:51.157 "digest": "sha384", 00:12:51.157 "state": "completed" 00:12:51.157 }, 00:12:51.157 "cntlid": 59, 00:12:51.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:51.157 "listen_address": { 00:12:51.157 "adrfam": "IPv4", 00:12:51.157 "traddr": "10.0.0.3", 00:12:51.157 "trsvcid": "4420", 00:12:51.157 "trtype": "TCP" 00:12:51.157 }, 00:12:51.157 "peer_address": { 00:12:51.157 "adrfam": "IPv4", 00:12:51.157 "traddr": "10.0.0.1", 00:12:51.157 "trsvcid": "43046", 00:12:51.157 "trtype": "TCP" 00:12:51.157 }, 00:12:51.157 "qid": 0, 00:12:51.157 "state": "enabled", 00:12:51.157 "thread": "nvmf_tgt_poll_group_000" 00:12:51.157 } 00:12:51.157 ]' 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.157 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.414 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.415 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.415 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:51.415 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:52.346 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.604 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.862 00:12:52.862 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.862 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.862 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.119 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.120 { 00:12:53.120 "auth": { 00:12:53.120 "dhgroup": "ffdhe2048", 00:12:53.120 "digest": "sha384", 00:12:53.120 "state": "completed" 00:12:53.120 }, 00:12:53.120 "cntlid": 61, 00:12:53.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:53.120 "listen_address": { 00:12:53.120 "adrfam": "IPv4", 00:12:53.120 "traddr": "10.0.0.3", 00:12:53.120 "trsvcid": "4420", 00:12:53.120 "trtype": "TCP" 00:12:53.120 }, 00:12:53.120 "peer_address": { 00:12:53.120 "adrfam": "IPv4", 00:12:53.120 "traddr": "10.0.0.1", 00:12:53.120 "trsvcid": "43066", 00:12:53.120 "trtype": "TCP" 00:12:53.120 }, 00:12:53.120 "qid": 0, 00:12:53.120 "state": "enabled", 00:12:53.120 "thread": "nvmf_tgt_poll_group_000" 00:12:53.120 } 00:12:53.120 ]' 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.120 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.377 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.377 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.377 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.377 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.377 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.636 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:53.636 17:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:54.203 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.462 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.463 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.463 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.029 00:12:55.029 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.029 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.029 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.289 { 00:12:55.289 "auth": { 00:12:55.289 "dhgroup": "ffdhe2048", 00:12:55.289 "digest": "sha384", 00:12:55.289 "state": "completed" 00:12:55.289 }, 00:12:55.289 "cntlid": 63, 00:12:55.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:55.289 "listen_address": { 00:12:55.289 "adrfam": "IPv4", 00:12:55.289 "traddr": "10.0.0.3", 00:12:55.289 "trsvcid": "4420", 00:12:55.289 "trtype": "TCP" 00:12:55.289 }, 00:12:55.289 "peer_address": { 00:12:55.289 "adrfam": "IPv4", 00:12:55.289 "traddr": "10.0.0.1", 00:12:55.289 "trsvcid": "43098", 00:12:55.289 "trtype": "TCP" 00:12:55.289 }, 00:12:55.289 "qid": 0, 00:12:55.289 "state": "enabled", 00:12:55.289 "thread": "nvmf_tgt_poll_group_000" 00:12:55.289 } 00:12:55.289 ]' 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.289 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.547 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.547 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.547 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.805 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:55.805 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.741 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:57.000 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.259 00:12:57.259 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.259 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.259 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.826 { 00:12:57.826 "auth": { 00:12:57.826 "dhgroup": "ffdhe3072", 00:12:57.826 "digest": "sha384", 00:12:57.826 "state": "completed" 00:12:57.826 }, 00:12:57.826 "cntlid": 65, 00:12:57.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:12:57.826 "listen_address": { 00:12:57.826 "adrfam": "IPv4", 00:12:57.826 "traddr": "10.0.0.3", 00:12:57.826 "trsvcid": "4420", 00:12:57.826 "trtype": "TCP" 00:12:57.826 }, 00:12:57.826 "peer_address": { 00:12:57.826 "adrfam": "IPv4", 00:12:57.826 "traddr": "10.0.0.1", 00:12:57.826 "trsvcid": "43120", 00:12:57.826 "trtype": "TCP" 00:12:57.826 }, 00:12:57.826 "qid": 0, 00:12:57.826 "state": "enabled", 00:12:57.826 "thread": "nvmf_tgt_poll_group_000" 00:12:57.826 } 00:12:57.826 ]' 00:12:57.826 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.827 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.393 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:58.393 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:58.959 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.218 17:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.218 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.786 00:12:59.786 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.786 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.786 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.046 { 00:13:00.046 "auth": { 00:13:00.046 "dhgroup": "ffdhe3072", 00:13:00.046 "digest": "sha384", 00:13:00.046 "state": "completed" 00:13:00.046 }, 00:13:00.046 "cntlid": 67, 00:13:00.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:00.046 "listen_address": { 00:13:00.046 "adrfam": "IPv4", 00:13:00.046 "traddr": "10.0.0.3", 00:13:00.046 "trsvcid": "4420", 00:13:00.046 "trtype": "TCP" 00:13:00.046 }, 00:13:00.046 "peer_address": { 00:13:00.046 "adrfam": "IPv4", 00:13:00.046 "traddr": "10.0.0.1", 00:13:00.046 "trsvcid": "34106", 00:13:00.046 "trtype": "TCP" 00:13:00.046 }, 00:13:00.046 "qid": 0, 00:13:00.046 "state": "enabled", 00:13:00.046 "thread": "nvmf_tgt_poll_group_000" 00:13:00.046 } 00:13:00.046 ]' 00:13:00.046 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.305 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.570 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:00.570 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:01.508 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.767 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.027 00:13:02.027 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.027 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.027 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.287 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.287 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.287 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.287 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.546 { 00:13:02.546 "auth": { 00:13:02.546 "dhgroup": "ffdhe3072", 00:13:02.546 "digest": "sha384", 00:13:02.546 "state": "completed" 00:13:02.546 }, 00:13:02.546 "cntlid": 69, 00:13:02.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:02.546 "listen_address": { 00:13:02.546 "adrfam": "IPv4", 00:13:02.546 "traddr": "10.0.0.3", 00:13:02.546 "trsvcid": "4420", 00:13:02.546 "trtype": "TCP" 00:13:02.546 }, 00:13:02.546 "peer_address": { 00:13:02.546 "adrfam": "IPv4", 00:13:02.546 "traddr": "10.0.0.1", 00:13:02.546 "trsvcid": "34138", 00:13:02.546 "trtype": "TCP" 00:13:02.546 }, 00:13:02.546 "qid": 0, 00:13:02.546 "state": "enabled", 00:13:02.546 "thread": "nvmf_tgt_poll_group_000" 00:13:02.546 } 00:13:02.546 ]' 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:02.546 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.805 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:02.805 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:03.741 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.001 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.570 00:13:04.570 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.570 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.570 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.830 { 00:13:04.830 "auth": { 00:13:04.830 "dhgroup": "ffdhe3072", 00:13:04.830 "digest": "sha384", 00:13:04.830 "state": "completed" 00:13:04.830 }, 00:13:04.830 "cntlid": 71, 00:13:04.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:04.830 "listen_address": { 00:13:04.830 "adrfam": "IPv4", 00:13:04.830 "traddr": "10.0.0.3", 00:13:04.830 "trsvcid": "4420", 00:13:04.830 "trtype": "TCP" 00:13:04.830 }, 00:13:04.830 "peer_address": { 00:13:04.830 "adrfam": "IPv4", 00:13:04.830 "traddr": "10.0.0.1", 00:13:04.830 "trsvcid": "34168", 00:13:04.830 "trtype": "TCP" 00:13:04.830 }, 00:13:04.830 "qid": 0, 00:13:04.830 "state": "enabled", 00:13:04.830 "thread": "nvmf_tgt_poll_group_000" 00:13:04.830 } 00:13:04.830 ]' 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.830 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.400 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:05.400 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:05.969 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.537 17:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.537 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.797 00:13:07.057 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.057 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.057 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.317 { 00:13:07.317 "auth": { 00:13:07.317 "dhgroup": "ffdhe4096", 00:13:07.317 "digest": "sha384", 00:13:07.317 "state": "completed" 00:13:07.317 }, 00:13:07.317 "cntlid": 73, 00:13:07.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:07.317 "listen_address": { 00:13:07.317 "adrfam": "IPv4", 00:13:07.317 "traddr": "10.0.0.3", 00:13:07.317 "trsvcid": "4420", 00:13:07.317 "trtype": "TCP" 00:13:07.317 }, 00:13:07.317 "peer_address": { 00:13:07.317 "adrfam": "IPv4", 00:13:07.317 "traddr": "10.0.0.1", 00:13:07.317 "trsvcid": "34188", 00:13:07.317 "trtype": "TCP" 00:13:07.317 }, 00:13:07.317 "qid": 0, 00:13:07.317 "state": "enabled", 00:13:07.317 "thread": "nvmf_tgt_poll_group_000" 00:13:07.317 } 00:13:07.317 ]' 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.317 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.884 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:07.884 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:08.452 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.019 17:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.019 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.020 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.278 00:13:09.278 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.278 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.278 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.538 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.538 { 00:13:09.538 "auth": { 00:13:09.538 "dhgroup": "ffdhe4096", 00:13:09.538 "digest": "sha384", 00:13:09.538 "state": "completed" 00:13:09.538 }, 00:13:09.538 "cntlid": 75, 00:13:09.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:09.538 "listen_address": { 00:13:09.538 "adrfam": "IPv4", 00:13:09.538 "traddr": "10.0.0.3", 00:13:09.538 "trsvcid": "4420", 00:13:09.538 "trtype": "TCP" 00:13:09.538 }, 00:13:09.538 "peer_address": { 00:13:09.538 "adrfam": "IPv4", 00:13:09.538 "traddr": "10.0.0.1", 00:13:09.538 "trsvcid": "46202", 00:13:09.538 "trtype": "TCP" 00:13:09.538 }, 00:13:09.538 "qid": 0, 00:13:09.538 "state": "enabled", 00:13:09.538 "thread": "nvmf_tgt_poll_group_000" 00:13:09.538 } 00:13:09.538 ]' 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.796 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.797 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.056 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:10.056 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:10.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:11.254 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:11.254 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.255 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.823 00:13:11.823 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.823 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.823 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.081 { 00:13:12.081 "auth": { 00:13:12.081 "dhgroup": "ffdhe4096", 00:13:12.081 "digest": "sha384", 00:13:12.081 "state": "completed" 00:13:12.081 }, 00:13:12.081 "cntlid": 77, 00:13:12.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:12.081 "listen_address": { 00:13:12.081 "adrfam": "IPv4", 00:13:12.081 "traddr": "10.0.0.3", 00:13:12.081 "trsvcid": "4420", 00:13:12.081 "trtype": "TCP" 00:13:12.081 }, 00:13:12.081 "peer_address": { 00:13:12.081 "adrfam": "IPv4", 00:13:12.081 "traddr": "10.0.0.1", 00:13:12.081 "trsvcid": "46214", 00:13:12.081 "trtype": "TCP" 00:13:12.081 }, 00:13:12.081 "qid": 0, 00:13:12.081 "state": "enabled", 00:13:12.081 "thread": "nvmf_tgt_poll_group_000" 00:13:12.081 } 00:13:12.081 ]' 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.081 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.082 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.082 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.082 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.340 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:12.340 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:12.906 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.906 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:12.906 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.906 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.906 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.165 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.165 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:13.165 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.423 17:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:13:13.423 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.424 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.424 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.424 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:13.424 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.424 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.682 00:13:13.682 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.682 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.682 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.941 { 00:13:13.941 "auth": { 00:13:13.941 "dhgroup": "ffdhe4096", 00:13:13.941 "digest": "sha384", 00:13:13.941 "state": "completed" 00:13:13.941 }, 00:13:13.941 "cntlid": 79, 00:13:13.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:13.941 "listen_address": { 00:13:13.941 "adrfam": "IPv4", 00:13:13.941 "traddr": "10.0.0.3", 00:13:13.941 "trsvcid": "4420", 00:13:13.941 "trtype": "TCP" 00:13:13.941 }, 00:13:13.941 "peer_address": { 00:13:13.941 "adrfam": "IPv4", 00:13:13.941 "traddr": "10.0.0.1", 00:13:13.941 "trsvcid": "46236", 00:13:13.941 "trtype": "TCP" 00:13:13.941 }, 00:13:13.941 "qid": 0, 00:13:13.941 "state": "enabled", 00:13:13.941 "thread": "nvmf_tgt_poll_group_000" 00:13:13.941 } 00:13:13.941 ]' 00:13:13.941 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.200 17:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.200 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.459 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:14.459 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:15.059 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.318 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.885 00:13:15.885 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.885 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.885 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.143 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.143 { 00:13:16.143 "auth": { 00:13:16.143 "dhgroup": "ffdhe6144", 00:13:16.143 "digest": "sha384", 00:13:16.143 "state": "completed" 00:13:16.143 }, 00:13:16.143 "cntlid": 81, 00:13:16.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:16.143 "listen_address": { 00:13:16.143 "adrfam": "IPv4", 00:13:16.143 "traddr": "10.0.0.3", 00:13:16.143 "trsvcid": "4420", 00:13:16.143 "trtype": "TCP" 00:13:16.143 }, 00:13:16.143 "peer_address": { 00:13:16.143 "adrfam": "IPv4", 00:13:16.143 "traddr": "10.0.0.1", 00:13:16.143 "trsvcid": "46268", 00:13:16.143 "trtype": "TCP" 00:13:16.143 }, 00:13:16.143 "qid": 0, 00:13:16.143 "state": "enabled", 00:13:16.143 "thread": "nvmf_tgt_poll_group_000" 00:13:16.143 } 00:13:16.143 ]' 00:13:16.144 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
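[Annotation] The run above repeats one verification unit per digest/dhgroup/key combination: pin a single digest and DH group on the host, authorize the key for the host NQN on the target, attach a controller, confirm the new qpair reports auth state "completed", then detach. Below is a condensed, hedged sketch of that unit, assembled only from commands, addresses, and NQNs visible in this trace; key0/ckey0 are the test's own placeholder key names and are assumed to be registered on both sides beforehand, and the target-side calls are shown against rpc.py's default socket rather than the test's rpc_cmd wrapper.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07

  # 1. Pin the host to one digest/dhgroup so the DH-HMAC-CHAP negotiation is deterministic.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # 2. Authorize the host NQN on the target with the key under test
  #    (--dhchap-ctrlr-key only in the bidirectional cases; key3 is exercised without it).
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach from the host, then verify digest, dhgroup, and auth state on the qpair.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"

  # 4. Detach so the next digest/dhgroup/key combination starts from a clean slate.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each cycle then re-checks the same keys through the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret with the DHHC-1 blobs shown in the trace) before nvme disconnect and nvmf_subsystem_remove_host reset the state. [End annotation]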
00:13:16.144 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.144 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.144 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.144 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.402 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.402 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.402 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.661 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:16.661 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:17.230 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.489 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.057 00:13:18.057 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.057 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.057 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.317 { 00:13:18.317 "auth": { 00:13:18.317 "dhgroup": "ffdhe6144", 00:13:18.317 "digest": "sha384", 00:13:18.317 "state": "completed" 00:13:18.317 }, 00:13:18.317 "cntlid": 83, 00:13:18.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:18.317 "listen_address": { 00:13:18.317 "adrfam": "IPv4", 00:13:18.317 "traddr": "10.0.0.3", 00:13:18.317 "trsvcid": "4420", 00:13:18.317 "trtype": "TCP" 00:13:18.317 }, 00:13:18.317 "peer_address": { 00:13:18.317 "adrfam": "IPv4", 00:13:18.317 "traddr": "10.0.0.1", 00:13:18.317 "trsvcid": "46298", 00:13:18.317 "trtype": "TCP" 00:13:18.317 }, 00:13:18.317 "qid": 0, 00:13:18.317 "state": 
"enabled", 00:13:18.317 "thread": "nvmf_tgt_poll_group_000" 00:13:18.317 } 00:13:18.317 ]' 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.317 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.576 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:18.576 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.513 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.513 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.513 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.513 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.513 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.081 00:13:20.081 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.081 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.081 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.340 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.340 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.341 { 00:13:20.341 "auth": { 00:13:20.341 "dhgroup": "ffdhe6144", 00:13:20.341 "digest": "sha384", 00:13:20.341 "state": "completed" 00:13:20.341 }, 00:13:20.341 "cntlid": 85, 00:13:20.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:20.341 "listen_address": { 00:13:20.341 "adrfam": "IPv4", 00:13:20.341 "traddr": "10.0.0.3", 00:13:20.341 "trsvcid": "4420", 00:13:20.341 "trtype": "TCP" 00:13:20.341 }, 00:13:20.341 "peer_address": { 00:13:20.341 "adrfam": "IPv4", 00:13:20.341 "traddr": "10.0.0.1", 00:13:20.341 
"trsvcid": "47382", 00:13:20.341 "trtype": "TCP" 00:13:20.341 }, 00:13:20.341 "qid": 0, 00:13:20.341 "state": "enabled", 00:13:20.341 "thread": "nvmf_tgt_poll_group_000" 00:13:20.341 } 00:13:20.341 ]' 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:20.341 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.600 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.600 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.600 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.859 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:20.860 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:21.428 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:13:21.686 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.687 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.687 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.687 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:21.687 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.687 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.945 00:13:22.204 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.204 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.204 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.463 { 00:13:22.463 "auth": { 00:13:22.463 "dhgroup": "ffdhe6144", 00:13:22.463 "digest": "sha384", 00:13:22.463 "state": "completed" 00:13:22.463 }, 00:13:22.463 "cntlid": 87, 00:13:22.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:22.463 "listen_address": { 00:13:22.463 "adrfam": "IPv4", 00:13:22.463 "traddr": "10.0.0.3", 00:13:22.463 "trsvcid": "4420", 00:13:22.463 "trtype": "TCP" 00:13:22.463 }, 00:13:22.463 "peer_address": { 00:13:22.463 "adrfam": "IPv4", 00:13:22.463 "traddr": "10.0.0.1", 
00:13:22.463 "trsvcid": "47394", 00:13:22.463 "trtype": "TCP" 00:13:22.463 }, 00:13:22.463 "qid": 0, 00:13:22.463 "state": "enabled", 00:13:22.463 "thread": "nvmf_tgt_poll_group_000" 00:13:22.463 } 00:13:22.463 ]' 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.463 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.825 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:22.826 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:23.412 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.671 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.237 00:13:24.237 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.237 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.237 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.805 { 00:13:24.805 "auth": { 00:13:24.805 "dhgroup": "ffdhe8192", 00:13:24.805 "digest": "sha384", 00:13:24.805 "state": "completed" 00:13:24.805 }, 00:13:24.805 "cntlid": 89, 00:13:24.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:24.805 "listen_address": { 00:13:24.805 "adrfam": "IPv4", 00:13:24.805 "traddr": "10.0.0.3", 00:13:24.805 "trsvcid": "4420", 00:13:24.805 "trtype": "TCP" 
00:13:24.805 }, 00:13:24.805 "peer_address": { 00:13:24.805 "adrfam": "IPv4", 00:13:24.805 "traddr": "10.0.0.1", 00:13:24.805 "trsvcid": "47424", 00:13:24.805 "trtype": "TCP" 00:13:24.805 }, 00:13:24.805 "qid": 0, 00:13:24.805 "state": "enabled", 00:13:24.805 "thread": "nvmf_tgt_poll_group_000" 00:13:24.805 } 00:13:24.805 ]' 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.805 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.064 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:25.065 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:26.001 17:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.001 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.937 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.937 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.937 { 00:13:26.937 "auth": { 00:13:26.937 "dhgroup": "ffdhe8192", 00:13:26.937 "digest": "sha384", 00:13:26.937 "state": "completed" 00:13:26.937 }, 00:13:26.937 "cntlid": 91, 00:13:26.937 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:26.937 "listen_address": { 00:13:26.937 "adrfam": "IPv4", 00:13:26.937 "traddr": "10.0.0.3", 00:13:26.937 "trsvcid": "4420", 00:13:26.937 "trtype": "TCP" 00:13:26.937 }, 00:13:26.937 "peer_address": { 00:13:26.937 "adrfam": "IPv4", 00:13:26.937 "traddr": "10.0.0.1", 00:13:26.937 "trsvcid": "47458", 00:13:26.937 "trtype": "TCP" 00:13:26.937 }, 00:13:26.937 "qid": 0, 00:13:26.937 "state": "enabled", 00:13:26.937 "thread": "nvmf_tgt_poll_group_000" 00:13:26.937 } 00:13:26.937 ]' 00:13:26.938 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.196 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.455 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:27.455 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:28.023 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.590 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.158 00:13:29.158 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.158 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.158 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.416 { 00:13:29.416 "auth": { 00:13:29.416 "dhgroup": "ffdhe8192", 
00:13:29.416 "digest": "sha384", 00:13:29.416 "state": "completed" 00:13:29.416 }, 00:13:29.416 "cntlid": 93, 00:13:29.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:29.416 "listen_address": { 00:13:29.416 "adrfam": "IPv4", 00:13:29.416 "traddr": "10.0.0.3", 00:13:29.416 "trsvcid": "4420", 00:13:29.416 "trtype": "TCP" 00:13:29.416 }, 00:13:29.416 "peer_address": { 00:13:29.416 "adrfam": "IPv4", 00:13:29.416 "traddr": "10.0.0.1", 00:13:29.416 "trsvcid": "51196", 00:13:29.416 "trtype": "TCP" 00:13:29.416 }, 00:13:29.416 "qid": 0, 00:13:29.416 "state": "enabled", 00:13:29.416 "thread": "nvmf_tgt_poll_group_000" 00:13:29.416 } 00:13:29.416 ]' 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.416 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.417 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.675 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:29.675 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:13:30.241 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.499 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.435 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.436 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.436 { 00:13:31.436 "auth": { 00:13:31.436 "dhgroup": 
"ffdhe8192", 00:13:31.436 "digest": "sha384", 00:13:31.436 "state": "completed" 00:13:31.436 }, 00:13:31.436 "cntlid": 95, 00:13:31.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:31.436 "listen_address": { 00:13:31.436 "adrfam": "IPv4", 00:13:31.436 "traddr": "10.0.0.3", 00:13:31.436 "trsvcid": "4420", 00:13:31.436 "trtype": "TCP" 00:13:31.436 }, 00:13:31.436 "peer_address": { 00:13:31.436 "adrfam": "IPv4", 00:13:31.436 "traddr": "10.0.0.1", 00:13:31.436 "trsvcid": "51226", 00:13:31.436 "trtype": "TCP" 00:13:31.436 }, 00:13:31.436 "qid": 0, 00:13:31.436 "state": "enabled", 00:13:31.436 "thread": "nvmf_tgt_poll_group_000" 00:13:31.436 } 00:13:31.436 ]' 00:13:31.436 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.695 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.695 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.695 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.695 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.695 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.695 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.695 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.953 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:31.953 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.521 
17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.521 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.780 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.348 00:13:33.348 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.348 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.348 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.607 { 00:13:33.607 "auth": { 00:13:33.607 "dhgroup": "null", 00:13:33.607 "digest": "sha512", 00:13:33.607 "state": "completed" 00:13:33.607 }, 00:13:33.607 "cntlid": 97, 00:13:33.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:33.607 "listen_address": { 00:13:33.607 "adrfam": "IPv4", 00:13:33.607 "traddr": "10.0.0.3", 00:13:33.607 "trsvcid": "4420", 00:13:33.607 "trtype": "TCP" 00:13:33.607 }, 00:13:33.607 "peer_address": { 00:13:33.607 "adrfam": "IPv4", 00:13:33.607 "traddr": "10.0.0.1", 00:13:33.607 "trsvcid": "51266", 00:13:33.607 "trtype": "TCP" 00:13:33.607 }, 00:13:33.607 "qid": 0, 00:13:33.607 "state": "enabled", 00:13:33.607 "thread": "nvmf_tgt_poll_group_000" 00:13:33.607 } 00:13:33.607 ]' 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.607 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.608 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.608 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:33.608 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.608 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.608 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.608 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.867 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:33.867 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:34.805 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.805 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.064 00:13:35.064 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.064 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.064 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.323 17:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.323 { 00:13:35.323 "auth": { 00:13:35.323 "dhgroup": "null", 00:13:35.323 "digest": "sha512", 00:13:35.323 "state": "completed" 00:13:35.323 }, 00:13:35.323 "cntlid": 99, 00:13:35.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:35.323 "listen_address": { 00:13:35.323 "adrfam": "IPv4", 00:13:35.323 "traddr": "10.0.0.3", 00:13:35.323 "trsvcid": "4420", 00:13:35.323 "trtype": "TCP" 00:13:35.323 }, 00:13:35.323 "peer_address": { 00:13:35.323 "adrfam": "IPv4", 00:13:35.323 "traddr": "10.0.0.1", 00:13:35.323 "trsvcid": "51294", 00:13:35.323 "trtype": "TCP" 00:13:35.323 }, 00:13:35.323 "qid": 0, 00:13:35.323 "state": "enabled", 00:13:35.323 "thread": "nvmf_tgt_poll_group_000" 00:13:35.323 } 00:13:35.323 ]' 00:13:35.323 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.582 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.841 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:35.841 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.408 17:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:36.408 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.672 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.965 00:13:36.965 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.965 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.965 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.223 { 00:13:37.223 "auth": { 00:13:37.223 "dhgroup": "null", 00:13:37.223 "digest": "sha512", 00:13:37.223 "state": "completed" 00:13:37.223 }, 00:13:37.223 "cntlid": 101, 00:13:37.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:37.223 "listen_address": { 00:13:37.223 "adrfam": "IPv4", 00:13:37.223 "traddr": "10.0.0.3", 00:13:37.223 "trsvcid": "4420", 00:13:37.223 "trtype": "TCP" 00:13:37.223 }, 00:13:37.223 "peer_address": { 00:13:37.223 "adrfam": "IPv4", 00:13:37.223 "traddr": "10.0.0.1", 00:13:37.223 "trsvcid": "51310", 00:13:37.223 "trtype": "TCP" 00:13:37.223 }, 00:13:37.223 "qid": 0, 00:13:37.223 "state": "enabled", 00:13:37.223 "thread": "nvmf_tgt_poll_group_000" 00:13:37.223 } 00:13:37.223 ]' 00:13:37.223 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.482 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.739 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:37.739 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:13:38.307 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.307 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:38.307 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.307 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:38.566 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.566 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.566 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:38.566 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:38.824 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:38.824 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.824 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.824 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.825 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.084 00:13:39.084 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.084 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.084 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.344 { 00:13:39.344 "auth": { 00:13:39.344 "dhgroup": "null", 00:13:39.344 "digest": "sha512", 00:13:39.344 "state": "completed" 00:13:39.344 }, 00:13:39.344 "cntlid": 103, 00:13:39.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:39.344 "listen_address": { 00:13:39.344 "adrfam": "IPv4", 00:13:39.344 "traddr": "10.0.0.3", 00:13:39.344 "trsvcid": "4420", 00:13:39.344 "trtype": "TCP" 00:13:39.344 }, 00:13:39.344 "peer_address": { 00:13:39.344 "adrfam": "IPv4", 00:13:39.344 "traddr": "10.0.0.1", 00:13:39.344 "trsvcid": "33188", 00:13:39.344 "trtype": "TCP" 00:13:39.344 }, 00:13:39.344 "qid": 0, 00:13:39.344 "state": "enabled", 00:13:39.344 "thread": "nvmf_tgt_poll_group_000" 00:13:39.344 } 00:13:39.344 ]' 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.344 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.911 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:39.911 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:40.170 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.738 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.997 00:13:40.997 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.997 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.997 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.255 
17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.255 { 00:13:41.255 "auth": { 00:13:41.255 "dhgroup": "ffdhe2048", 00:13:41.255 "digest": "sha512", 00:13:41.255 "state": "completed" 00:13:41.255 }, 00:13:41.255 "cntlid": 105, 00:13:41.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:13:41.255 "listen_address": { 00:13:41.255 "adrfam": "IPv4", 00:13:41.255 "traddr": "10.0.0.3", 00:13:41.255 "trsvcid": "4420", 00:13:41.255 "trtype": "TCP" 00:13:41.255 }, 00:13:41.255 "peer_address": { 00:13:41.255 "adrfam": "IPv4", 00:13:41.255 "traddr": "10.0.0.1", 00:13:41.255 "trsvcid": "33222", 00:13:41.255 "trtype": "TCP" 00:13:41.255 }, 00:13:41.255 "qid": 0, 00:13:41.255 "state": "enabled", 00:13:41.255 "thread": "nvmf_tgt_poll_group_000" 00:13:41.255 } 00:13:41.255 ]' 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.255 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.822 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:41.822 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:13:42.389 17:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:42.389 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:42.647 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:42.906
00:13:42.906 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:42.906 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:42.906 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:43.164 {
00:13:43.164 "auth": {
00:13:43.164 "dhgroup": "ffdhe2048",
00:13:43.164 "digest": "sha512",
00:13:43.164 "state": "completed"
00:13:43.164 },
00:13:43.164 "cntlid": 107,
00:13:43.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:43.164 "listen_address": {
00:13:43.164 "adrfam": "IPv4",
00:13:43.164 "traddr": "10.0.0.3",
00:13:43.164 "trsvcid": "4420",
00:13:43.164 "trtype": "TCP"
00:13:43.164 },
00:13:43.164 "peer_address": {
00:13:43.164 "adrfam": "IPv4",
00:13:43.164 "traddr": "10.0.0.1",
00:13:43.164 "trsvcid": "33254",
00:13:43.164 "trtype": "TCP"
00:13:43.164 },
00:13:43.164 "qid": 0,
00:13:43.164 "state": "enabled",
00:13:43.164 "thread": "nvmf_tgt_poll_group_000"
00:13:43.164 }
00:13:43.164 ]'
00:13:43.164 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:43.422 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:43.680 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:13:43.680 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:44.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:44.248 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:44.506 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:44.507 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:44.765
00:13:44.765 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:44.765 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:44.765 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:45.024 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:45.024 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:45.024 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.024 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:45.282 {
00:13:45.282 "auth": {
00:13:45.282 "dhgroup": "ffdhe2048",
00:13:45.282 "digest": "sha512",
00:13:45.282 "state": "completed"
00:13:45.282 },
00:13:45.282 "cntlid": 109,
00:13:45.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:45.282 "listen_address": {
00:13:45.282 "adrfam": "IPv4",
00:13:45.282 "traddr": "10.0.0.3",
00:13:45.282 "trsvcid": "4420",
00:13:45.282 "trtype": "TCP"
00:13:45.282 },
00:13:45.282 "peer_address": {
00:13:45.282 "adrfam": "IPv4",
00:13:45.282 "traddr": "10.0.0.1",
00:13:45.282 "trsvcid": "33266",
00:13:45.282 "trtype": "TCP"
00:13:45.282 },
00:13:45.282 "qid": 0,
00:13:45.282 "state": "enabled",
00:13:45.282 "thread": "nvmf_tgt_poll_group_000"
00:13:45.282 }
00:13:45.282 ]'
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:45.282 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:45.540 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:13:45.540 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:46.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:46.107 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:46.673 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:46.933
00:13:46.933 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:46.933 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:46.933 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:47.192 {
00:13:47.192 "auth": {
00:13:47.192 "dhgroup": "ffdhe2048",
00:13:47.192 "digest": "sha512",
00:13:47.192 "state": "completed"
00:13:47.192 },
00:13:47.192 "cntlid": 111,
00:13:47.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:47.192 "listen_address": {
00:13:47.192 "adrfam": "IPv4",
00:13:47.192 "traddr": "10.0.0.3",
00:13:47.192 "trsvcid": "4420",
00:13:47.192 "trtype": "TCP"
00:13:47.192 },
00:13:47.192 "peer_address": {
00:13:47.192 "adrfam": "IPv4",
00:13:47.192 "traddr": "10.0.0.1",
00:13:47.192 "trsvcid": "33300",
00:13:47.192 "trtype": "TCP"
00:13:47.192 },
00:13:47.192 "qid": 0,
00:13:47.192 "state": "enabled",
00:13:47.192 "thread": "nvmf_tgt_poll_group_000"
00:13:47.192 }
00:13:47.192 ]'
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:47.192 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:47.450 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:47.450 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:47.450 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:47.709 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=:
00:13:47.709 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=:
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:48.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:48.288 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.560 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:48.561 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:48.561 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:48.818
00:13:48.818 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:48.818 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:48.818 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:49.075 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:49.075 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:49.075 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.075 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:49.076 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.076 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:49.076 {
00:13:49.076 "auth": {
00:13:49.076 "dhgroup": "ffdhe3072",
00:13:49.076 "digest": "sha512",
00:13:49.076 "state": "completed"
00:13:49.076 },
00:13:49.076 "cntlid": 113,
00:13:49.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:49.076 "listen_address": {
00:13:49.076 "adrfam": "IPv4",
00:13:49.076 "traddr": "10.0.0.3",
00:13:49.076 "trsvcid": "4420",
00:13:49.076 "trtype": "TCP"
00:13:49.076 },
00:13:49.076 "peer_address": {
00:13:49.076 "adrfam": "IPv4",
00:13:49.076 "traddr": "10.0.0.1",
00:13:49.076 "trsvcid": "38338",
00:13:49.076 "trtype": "TCP"
00:13:49.076 },
00:13:49.076 "qid": 0,
00:13:49.076 "state": "enabled",
00:13:49.076 "thread": "nvmf_tgt_poll_group_000"
00:13:49.076 }
00:13:49.076 ]'
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:49.333 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:49.590 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=:
00:13:49.590 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=:
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:50.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:50.523 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:50.780
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:51.039 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:51.298 {
00:13:51.298 "auth": {
00:13:51.298 "dhgroup": "ffdhe3072",
00:13:51.298 "digest": "sha512",
00:13:51.298 "state": "completed"
00:13:51.298 },
00:13:51.298 "cntlid": 115,
00:13:51.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:51.298 "listen_address": {
00:13:51.298 "adrfam": "IPv4",
00:13:51.298 "traddr": "10.0.0.3",
00:13:51.298 "trsvcid": "4420",
00:13:51.298 "trtype": "TCP"
00:13:51.298 },
00:13:51.298 "peer_address": {
00:13:51.298 "adrfam": "IPv4",
00:13:51.298 "traddr": "10.0.0.1",
00:13:51.298 "trsvcid": "38376",
00:13:51.298 "trtype": "TCP"
00:13:51.298 },
00:13:51.298 "qid": 0,
00:13:51.298 "state": "enabled",
00:13:51.298 "thread": "nvmf_tgt_poll_group_000"
00:13:51.298 }
00:13:51.298 ]'
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:51.298 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:51.557 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:13:51.557 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:13:52.124 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:52.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:52.124 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:52.124 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.124 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:52.383 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.383 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:52.383 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:52.383 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:52.641 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.642 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:52.642 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.642 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:52.642 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:52.642 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:52.900
00:13:52.900 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:52.900 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:52.900 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:53.159 {
00:13:53.159 "auth": {
00:13:53.159 "dhgroup": "ffdhe3072",
00:13:53.159 "digest": "sha512",
00:13:53.159 "state": "completed"
00:13:53.159 },
00:13:53.159 "cntlid": 117,
00:13:53.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:53.159 "listen_address": {
00:13:53.159 "adrfam": "IPv4",
00:13:53.159 "traddr": "10.0.0.3",
00:13:53.159 "trsvcid": "4420",
00:13:53.159 "trtype": "TCP"
00:13:53.159 },
00:13:53.159 "peer_address": {
00:13:53.159 "adrfam": "IPv4",
00:13:53.159 "traddr": "10.0.0.1",
00:13:53.159 "trsvcid": "38388",
00:13:53.159 "trtype": "TCP"
00:13:53.159 },
00:13:53.159 "qid": 0,
00:13:53.159 "state": "enabled",
00:13:53.159 "thread": "nvmf_tgt_poll_group_000"
00:13:53.159 }
00:13:53.159 ]'
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:53.159 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:53.418 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:53.418 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:53.418 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:53.418 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:53.418 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:53.677 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:13:53.677 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:54.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:54.244 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:54.503 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:54.762
00:13:55.020 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:55.020 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:55.020 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:55.279 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:55.279 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:55.279 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.279 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:55.279 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:55.280 {
00:13:55.280 "auth": {
00:13:55.280 "dhgroup": "ffdhe3072",
00:13:55.280 "digest": "sha512",
00:13:55.280 "state": "completed"
00:13:55.280 },
00:13:55.280 "cntlid": 119,
00:13:55.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:55.280 "listen_address": {
00:13:55.280 "adrfam": "IPv4",
00:13:55.280 "traddr": "10.0.0.3",
00:13:55.280 "trsvcid": "4420",
00:13:55.280 "trtype": "TCP"
00:13:55.280 },
00:13:55.280 "peer_address": {
00:13:55.280 "adrfam": "IPv4",
00:13:55.280 "traddr": "10.0.0.1",
00:13:55.280 "trsvcid": "38410",
00:13:55.280 "trtype": "TCP"
00:13:55.280 },
00:13:55.280 "qid": 0,
00:13:55.280 "state": "enabled",
00:13:55.280 "thread": "nvmf_tgt_poll_group_000"
00:13:55.280 }
00:13:55.280 ]'
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:55.280 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:55.539 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=:
00:13:55.539 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=:
00:13:56.106 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:56.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:13:56.365 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:56.623 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:56.891
00:13:56.891 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:56.891 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:56.891 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.149 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:57.149 {
00:13:57.149 "auth": {
00:13:57.149 "dhgroup": "ffdhe4096",
00:13:57.149 "digest": "sha512",
00:13:57.149 "state": "completed"
00:13:57.149 },
00:13:57.149 "cntlid": 121,
00:13:57.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:57.149 "listen_address": {
00:13:57.149 "adrfam": "IPv4",
00:13:57.149 "traddr": "10.0.0.3",
00:13:57.149 "trsvcid": "4420",
00:13:57.149 "trtype": "TCP"
00:13:57.149 },
00:13:57.149 "peer_address": {
00:13:57.149 "adrfam": "IPv4",
00:13:57.149 "traddr": "10.0.0.1",
00:13:57.149 "trsvcid": "38434",
00:13:57.149 "trtype": "TCP"
00:13:57.149 },
00:13:57.149 "qid": 0,
00:13:57.149 "state": "enabled",
00:13:57.149 "thread": "nvmf_tgt_poll_group_000"
00:13:57.149 }
00:13:57.149 ]'
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:57.408 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:57.666 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=:
00:13:57.666 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=:
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:58.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:13:58.233 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:58.492 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:58.750
00:13:58.750 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:58.750 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:58.750 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:59.317 {
00:13:59.317 "auth": {
00:13:59.317 "dhgroup": "ffdhe4096",
00:13:59.317 "digest": "sha512",
00:13:59.317 "state": "completed"
00:13:59.317 },
00:13:59.317 "cntlid": 123,
00:13:59.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:13:59.317 "listen_address": {
00:13:59.317 "adrfam": "IPv4",
00:13:59.317 "traddr": "10.0.0.3",
00:13:59.317 "trsvcid": "4420",
00:13:59.317 "trtype": "TCP"
00:13:59.317 },
00:13:59.317 "peer_address": {
00:13:59.317 "adrfam": "IPv4",
00:13:59.317 "traddr": "10.0.0.1",
00:13:59.317 "trsvcid": "38784",
00:13:59.317 "trtype": "TCP"
00:13:59.317 },
00:13:59.317 "qid": 0,
00:13:59.317 "state": "enabled",
00:13:59.317 "thread": "nvmf_tgt_poll_group_000"
00:13:59.317 }
00:13:59.317 ]'
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:59.317 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:59.577 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:13:59.577 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==:
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:00.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:00.512 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.512 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:01.080
00:14:01.080 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:01.080 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:01.080 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:01.393 {
00:14:01.393 "auth": {
00:14:01.393 "dhgroup": "ffdhe4096",
00:14:01.393 "digest": "sha512",
00:14:01.393 "state": "completed"
00:14:01.393 },
00:14:01.393 "cntlid": 125,
00:14:01.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07",
00:14:01.393 "listen_address": {
00:14:01.393 "adrfam": "IPv4",
00:14:01.393 "traddr": "10.0.0.3",
00:14:01.393 "trsvcid": "4420",
00:14:01.393 "trtype": "TCP"
00:14:01.393 },
00:14:01.393 "peer_address": {
00:14:01.393 "adrfam": "IPv4",
00:14:01.393 "traddr": "10.0.0.1",
00:14:01.393 "trsvcid": "38816",
00:14:01.393 "trtype": "TCP"
00:14:01.393 },
00:14:01.393 "qid": 0,
00:14:01.393 "state": "enabled",
00:14:01.393 "thread": "nvmf_tgt_poll_group_000"
00:14:01.393 }
00:14:01.393 ]'
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:01.393 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:01.652 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:14:01.652 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+:
00:14:02.218 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:02.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:02.218 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
00:14:02.219 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.219 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.477 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.477 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:02.477 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:02.477 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.736 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.995 00:14:02.995 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.995 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.995 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.254 { 00:14:03.254 "auth": { 00:14:03.254 "dhgroup": "ffdhe4096", 00:14:03.254 "digest": "sha512", 00:14:03.254 "state": "completed" 00:14:03.254 }, 00:14:03.254 "cntlid": 127, 00:14:03.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:03.254 "listen_address": { 00:14:03.254 "adrfam": "IPv4", 00:14:03.254 "traddr": "10.0.0.3", 00:14:03.254 "trsvcid": "4420", 00:14:03.254 "trtype": "TCP" 00:14:03.254 }, 00:14:03.254 "peer_address": { 00:14:03.254 "adrfam": "IPv4", 00:14:03.254 "traddr": "10.0.0.1", 00:14:03.254 "trsvcid": "38846", 00:14:03.254 "trtype": "TCP" 00:14:03.254 }, 00:14:03.254 "qid": 0, 00:14:03.254 "state": "enabled", 00:14:03.254 "thread": "nvmf_tgt_poll_group_000" 00:14:03.254 } 00:14:03.254 ]' 00:14:03.254 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.512 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
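The per-key pattern repeating above is fixed: the target authorizes the host NQN with a DH-HMAC-CHAP key, the host-side bdev layer attaches a controller with the matching key pair, the qpair's auth block is checked, and the controller is detached again. A loose reconstruction of one loop body follows; hostrpc, rpc_cmd and the keys/ckeys arrays are helpers and variables internal to target/auth.sh whose definitions are assumed here, and $keyid stands in for the script's positional parameter:

    # One iteration of the per-key loop, reconstructed from the trace above.
    # ckeys[3] is empty, which is why key3 iterations carry no
    # --dhchap-ctrlr-key argument: :+ expands to nothing when unset/empty.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" "${ckey[@]}"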
00:14:03.771 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:03.771 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:04.339 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.598 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
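Every hostrpc call in this trace is plain scripts/rpc.py aimed at a second SPDK application over its Unix-domain RPC socket, /var/tmp/host.sock, while rpc_cmd drives the target application. A sketch of the two call paths; the target socket path /var/tmp/spdk.sock is the SPDK default and an assumption here, the rest is taken verbatim from the trace:

    # Host-side initiator app: restrict the digests/dhgroups it will offer.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Target-side app: authorize the host NQN with key0 and controller key ckey0.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0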
00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.857 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.116 00:14:05.375 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.375 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.375 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.634 { 00:14:05.634 "auth": { 00:14:05.634 "dhgroup": "ffdhe6144", 00:14:05.634 "digest": "sha512", 00:14:05.634 "state": "completed" 00:14:05.634 }, 00:14:05.634 "cntlid": 129, 00:14:05.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:05.634 "listen_address": { 00:14:05.634 "adrfam": "IPv4", 00:14:05.634 "traddr": "10.0.0.3", 00:14:05.634 "trsvcid": "4420", 00:14:05.634 "trtype": "TCP" 00:14:05.634 }, 00:14:05.634 "peer_address": { 00:14:05.634 "adrfam": "IPv4", 00:14:05.634 "traddr": "10.0.0.1", 00:14:05.634 "trsvcid": "38880", 00:14:05.634 "trtype": "TCP" 00:14:05.634 }, 00:14:05.634 "qid": 0, 00:14:05.634 "state": "enabled", 00:14:05.634 "thread": "nvmf_tgt_poll_group_000" 00:14:05.634 } 00:14:05.634 ]' 00:14:05.634 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.634 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.202 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:06.202 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:06.770 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:06.771 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.030 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.596 00:14:07.596 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.596 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.596 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.854 { 00:14:07.854 "auth": { 00:14:07.854 "dhgroup": "ffdhe6144", 00:14:07.854 "digest": "sha512", 00:14:07.854 "state": "completed" 00:14:07.854 }, 00:14:07.854 "cntlid": 131, 00:14:07.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:07.854 "listen_address": { 00:14:07.854 "adrfam": "IPv4", 00:14:07.854 "traddr": "10.0.0.3", 00:14:07.854 "trsvcid": "4420", 00:14:07.854 "trtype": "TCP" 00:14:07.854 }, 00:14:07.854 "peer_address": { 00:14:07.854 "adrfam": "IPv4", 00:14:07.854 "traddr": "10.0.0.1", 00:14:07.854 "trsvcid": "38914", 00:14:07.854 "trtype": "TCP" 00:14:07.854 }, 00:14:07.854 "qid": 0, 00:14:07.854 "state": "enabled", 00:14:07.854 "thread": "nvmf_tgt_poll_group_000" 00:14:07.854 } 00:14:07.854 ]' 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
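After each attach the script pulls the live qpair list from the target and asserts the negotiated parameters field by field; the [[ sha512 == \s\h\a\5\1\2 ]] lines above are those string comparisons as mangled by xtrace. The same checks, condensed, using exactly the RPC and jq paths from the trace:

    # Fetch qpairs for the subsystem and assert the negotiated auth block.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]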
00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.854 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.112 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:14:08.112 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.680 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.940 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.199 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.199 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.199 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.199 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.458 00:14:09.458 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.458 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.458 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.717 { 00:14:09.717 "auth": { 00:14:09.717 "dhgroup": "ffdhe6144", 00:14:09.717 "digest": "sha512", 00:14:09.717 "state": "completed" 00:14:09.717 }, 00:14:09.717 "cntlid": 133, 00:14:09.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:09.717 "listen_address": { 00:14:09.717 "adrfam": "IPv4", 00:14:09.717 "traddr": "10.0.0.3", 00:14:09.717 "trsvcid": "4420", 00:14:09.717 "trtype": "TCP" 00:14:09.717 }, 00:14:09.717 "peer_address": { 00:14:09.717 "adrfam": "IPv4", 00:14:09.717 "traddr": "10.0.0.1", 00:14:09.717 "trsvcid": "43582", 00:14:09.717 "trtype": "TCP" 00:14:09.717 }, 00:14:09.717 "qid": 0, 00:14:09.717 "state": "enabled", 00:14:09.717 "thread": "nvmf_tgt_poll_group_000" 00:14:09.717 } 00:14:09.717 ]' 00:14:09.717 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.975 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.975 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.976 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:09.976 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.976 17:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.976 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.976 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.234 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:14:10.234 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:10.801 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.367 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.625 00:14:11.625 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.625 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.625 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.883 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.883 { 00:14:11.883 "auth": { 00:14:11.883 "dhgroup": "ffdhe6144", 00:14:11.883 "digest": "sha512", 00:14:11.883 "state": "completed" 00:14:11.883 }, 00:14:11.884 "cntlid": 135, 00:14:11.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:11.884 "listen_address": { 00:14:11.884 "adrfam": "IPv4", 00:14:11.884 "traddr": "10.0.0.3", 00:14:11.884 "trsvcid": "4420", 00:14:11.884 "trtype": "TCP" 00:14:11.884 }, 00:14:11.884 "peer_address": { 00:14:11.884 "adrfam": "IPv4", 00:14:11.884 "traddr": "10.0.0.1", 00:14:11.884 "trsvcid": "43614", 00:14:11.884 "trtype": "TCP" 00:14:11.884 }, 00:14:11.884 "qid": 0, 00:14:11.884 "state": "enabled", 00:14:11.884 "thread": "nvmf_tgt_poll_group_000" 00:14:11.884 } 00:14:11.884 ]' 00:14:11.884 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.152 
17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.152 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.424 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:12.424 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:12.990 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.990 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:12.990 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.991 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.991 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.991 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:12.991 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.249 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.815 00:14:14.074 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.074 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.074 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.332 { 00:14:14.332 "auth": { 00:14:14.332 "dhgroup": "ffdhe8192", 00:14:14.332 "digest": "sha512", 00:14:14.332 "state": "completed" 00:14:14.332 }, 00:14:14.332 "cntlid": 137, 00:14:14.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:14.332 "listen_address": { 00:14:14.332 "adrfam": "IPv4", 00:14:14.332 "traddr": "10.0.0.3", 00:14:14.332 "trsvcid": "4420", 00:14:14.332 "trtype": "TCP" 00:14:14.332 }, 00:14:14.332 "peer_address": { 00:14:14.332 "adrfam": "IPv4", 00:14:14.332 "traddr": "10.0.0.1", 00:14:14.332 "trsvcid": "43644", 00:14:14.332 "trtype": "TCP" 00:14:14.332 }, 00:14:14.332 "qid": 0, 00:14:14.332 "state": "enabled", 00:14:14.332 "thread": "nvmf_tgt_poll_group_000" 00:14:14.332 } 00:14:14.332 ]' 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.332 17:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.332 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.591 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:14.591 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.525 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.525 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.459 00:14:16.459 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.459 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.459 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.717 { 00:14:16.717 "auth": { 00:14:16.717 "dhgroup": "ffdhe8192", 00:14:16.717 "digest": "sha512", 00:14:16.717 "state": "completed" 00:14:16.717 }, 00:14:16.717 "cntlid": 139, 00:14:16.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:16.717 "listen_address": { 00:14:16.717 "adrfam": "IPv4", 00:14:16.717 "traddr": "10.0.0.3", 00:14:16.717 "trsvcid": "4420", 00:14:16.717 "trtype": "TCP" 00:14:16.717 }, 00:14:16.717 "peer_address": { 00:14:16.717 "adrfam": "IPv4", 00:14:16.717 "traddr": "10.0.0.1", 00:14:16.717 "trsvcid": "43670", 00:14:16.717 "trtype": "TCP" 00:14:16.717 }, 00:14:16.717 "qid": 0, 00:14:16.717 "state": "enabled", 00:14:16.717 "thread": "nvmf_tgt_poll_group_000" 00:14:16.717 } 00:14:16.717 ]' 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.717 17:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.717 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.283 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:14:17.283 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: --dhchap-ctrl-secret DHHC-1:02:OTMyZDY5MWMxNGUzMDE0NzRjODg2Mzg3YTMxMjM4NjQzNjcyZTI3ZTI1OTk0OTBmnh2fhA==: 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:17.848 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.105 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.671 00:14:18.929 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.929 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.929 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.187 { 00:14:19.187 "auth": { 00:14:19.187 "dhgroup": "ffdhe8192", 00:14:19.187 "digest": "sha512", 00:14:19.187 "state": "completed" 00:14:19.187 }, 00:14:19.187 "cntlid": 141, 00:14:19.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:19.187 "listen_address": { 00:14:19.187 "adrfam": "IPv4", 00:14:19.187 "traddr": "10.0.0.3", 00:14:19.187 "trsvcid": "4420", 00:14:19.187 "trtype": "TCP" 00:14:19.187 }, 00:14:19.187 "peer_address": { 00:14:19.187 "adrfam": "IPv4", 00:14:19.187 "traddr": "10.0.0.1", 00:14:19.187 "trsvcid": "33528", 00:14:19.187 "trtype": "TCP" 00:14:19.187 }, 00:14:19.187 "qid": 0, 00:14:19.187 "state": "enabled", 00:14:19.187 "thread": "nvmf_tgt_poll_group_000" 00:14:19.187 } 00:14:19.187 ]' 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
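The secrets passed around above follow the NVMe-oF DH-HMAC-CHAP key syntax: a DHHC-1:xx: prefix, a base64 payload, and a trailing colon. The xx field appears to select the transform applied to the secret; the trace consistently pairs key0 with DHHC-1:00, key1 with 01, key2 with 02 and key3 with 03, which matches the conventional mapping 00 = none, 01/02/03 = SHA-256/384/512, though that mapping is an inference here, not something the log states. A sketch that splits one of the keys from the trace into its fields (string handling only, the payload is not decoded or validated):

    # Split "DHHC-1:xx:<base64>:" on colons; the key is copied from the trace.
    secret='DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl:'
    IFS=: read -r fmt xform payload _ <<< "$secret"
    echo "format=$fmt transform=$xform payload_len=${#payload}"

On the host side these strings are handed directly to nvme-cli, as in the nvme connect lines above, with --dhchap-secret carrying the host key and --dhchap-ctrl-secret the bidirectional controller key.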
00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.187 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.753 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:14:19.753 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:01:ZWYzNWY4ZDAyNTJkZTBjYTgwNmQ1MTVhMjE0M2I1ODFr2cr+: 00:14:20.319 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.319 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:20.319 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.319 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.319 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.320 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.320 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:20.320 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.578 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.145 00:14:21.145 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.145 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.145 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.711 { 00:14:21.711 "auth": { 00:14:21.711 "dhgroup": "ffdhe8192", 00:14:21.711 "digest": "sha512", 00:14:21.711 "state": "completed" 00:14:21.711 }, 00:14:21.711 "cntlid": 143, 00:14:21.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:21.711 "listen_address": { 00:14:21.711 "adrfam": "IPv4", 00:14:21.711 "traddr": "10.0.0.3", 00:14:21.711 "trsvcid": "4420", 00:14:21.711 "trtype": "TCP" 00:14:21.711 }, 00:14:21.711 "peer_address": { 00:14:21.711 "adrfam": "IPv4", 00:14:21.711 "traddr": "10.0.0.1", 00:14:21.711 "trsvcid": "33556", 00:14:21.711 "trtype": "TCP" 00:14:21.711 }, 00:14:21.711 "qid": 0, 00:14:21.711 "state": "enabled", 00:14:21.711 "thread": "nvmf_tgt_poll_group_000" 00:14:21.711 } 00:14:21.711 ]' 00:14:21.711 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.711 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.969 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:21.969 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:22.534 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.534 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:22.534 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.534 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.534 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.534 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:22.535 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:22.535 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:22.535 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.535 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.535 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:22.792 17:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.792 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.793 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.743 00:14:23.743 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.743 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.743 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.743 { 00:14:23.743 "auth": { 00:14:23.743 "dhgroup": "ffdhe8192", 00:14:23.743 "digest": "sha512", 00:14:23.743 "state": "completed" 00:14:23.743 }, 00:14:23.743 "cntlid": 145, 00:14:23.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:23.743 "listen_address": { 00:14:23.743 "adrfam": "IPv4", 00:14:23.743 "traddr": "10.0.0.3", 
00:14:23.743 "trsvcid": "4420", 00:14:23.743 "trtype": "TCP" 00:14:23.743 }, 00:14:23.743 "peer_address": { 00:14:23.743 "adrfam": "IPv4", 00:14:23.743 "traddr": "10.0.0.1", 00:14:23.743 "trsvcid": "33576", 00:14:23.743 "trtype": "TCP" 00:14:23.743 }, 00:14:23.743 "qid": 0, 00:14:23.743 "state": "enabled", 00:14:23.743 "thread": "nvmf_tgt_poll_group_000" 00:14:23.743 } 00:14:23.743 ]' 00:14:23.743 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.002 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.262 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:24.262 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:00:NTU3YWJhYWE1NTI2Y2QzZGZlY2YzOGZhMWM3YjRkYzk5N2YzODhhMTk4YzIyZmYzQ6nJtg==: --dhchap-ctrl-secret DHHC-1:03:YzA1YmQ3OGFlMmUxMTQ1ZjY1Y2EyYmI3Y2M0ZWVhMWU3MjVkNTUzNmJhMTMyNTU3ZDE5MTAwNGQ5NzY5YTlhM2d0g88=: 00:14:24.830 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.830 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:24.830 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.830 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.830 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.831 17:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:24.831 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:25.397 2024/11/19 17:13:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.397 request: 00:14:25.397 { 00:14:25.397 "method": "bdev_nvme_attach_controller", 00:14:25.397 "params": { 00:14:25.397 "name": "nvme0", 00:14:25.397 "trtype": "tcp", 00:14:25.397 "traddr": "10.0.0.3", 00:14:25.397 "adrfam": "ipv4", 00:14:25.397 "trsvcid": "4420", 00:14:25.397 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:25.397 "prchk_reftag": false, 00:14:25.397 "prchk_guard": false, 00:14:25.397 "hdgst": false, 00:14:25.397 "ddgst": false, 00:14:25.397 "dhchap_key": "key2", 00:14:25.397 "allow_unrecognized_csi": false 00:14:25.397 } 00:14:25.397 } 00:14:25.397 Got JSON-RPC error response 00:14:25.397 GoRPCClient: error on JSON-RPC call 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 
)) 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.397 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.964 2024/11/19 17:13:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 
Msg=Input/output error 00:14:25.964 request: 00:14:25.964 { 00:14:25.964 "method": "bdev_nvme_attach_controller", 00:14:25.964 "params": { 00:14:25.964 "name": "nvme0", 00:14:25.964 "trtype": "tcp", 00:14:25.964 "traddr": "10.0.0.3", 00:14:25.964 "adrfam": "ipv4", 00:14:25.965 "trsvcid": "4420", 00:14:25.965 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:25.965 "prchk_reftag": false, 00:14:25.965 "prchk_guard": false, 00:14:25.965 "hdgst": false, 00:14:25.965 "ddgst": false, 00:14:25.965 "dhchap_key": "key1", 00:14:25.965 "dhchap_ctrlr_key": "ckey2", 00:14:25.965 "allow_unrecognized_csi": false 00:14:25.965 } 00:14:25.965 } 00:14:25.965 Got JSON-RPC error response 00:14:25.965 GoRPCClient: error on JSON-RPC call 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.965 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.531 2024/11/19 17:13:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:26.531 request: 00:14:26.531 { 00:14:26.531 "method": "bdev_nvme_attach_controller", 00:14:26.531 "params": { 00:14:26.531 "name": "nvme0", 00:14:26.531 "trtype": "tcp", 00:14:26.531 "traddr": "10.0.0.3", 00:14:26.531 "adrfam": "ipv4", 00:14:26.531 "trsvcid": "4420", 00:14:26.531 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:26.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:26.531 "prchk_reftag": false, 00:14:26.531 "prchk_guard": false, 00:14:26.531 "hdgst": false, 00:14:26.531 "ddgst": false, 00:14:26.531 "dhchap_key": "key1", 00:14:26.531 "dhchap_ctrlr_key": "ckey1", 00:14:26.531 "allow_unrecognized_csi": false 00:14:26.531 } 00:14:26.531 } 00:14:26.531 Got JSON-RPC error response 00:14:26.531 GoRPCClient: error on JSON-RPC call 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76609 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76609 ']' 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76609 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 
-- # uname 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.531 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76609 00:14:26.531 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.531 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.531 killing process with pid 76609 00:14:26.531 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76609' 00:14:26.531 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76609 00:14:26.531 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76609 00:14:26.789 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:26.789 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81536 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81536 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81536 ']' 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
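The run of Input/output errors above is intentional: each mismatched-key connection (key2 alone, key1 with ckey2, key1 with ckey1) is wrapped in autotest's NOT helper, so a failing bdev_nvme_attach_controller, the Code=-5 JSON-RPC error, is the passing outcome, and es=1 is asserted afterwards. A minimal sketch of that idiom, simplified from the valid_exec_arg/es machinery in common/autotest_common.sh (the real helper also special-cases signal exits, es > 128):

# Sketch only: run a command that is expected to fail; succeed only if it does.
NOT() {
    local es=0
    "$@" || es=$?
    # A non-zero exit status from the wrapped command is the expected result.
    (( es != 0 ))
}
# e.g. attaching with a key the subsystem was not configured for must fail:
NOT bdev_connect -b nvme0 --dhchap-key key2

With those mismatch cases done, the first nvmf_tgt instance (pid 76609) is killed and a fresh one (pid 81536) is launched with --wait-for-rpc -L nvmf_auth, so the remaining cases run with DH-HMAC-CHAP debug logging enabled.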
00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:26.790 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81536
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81536 ']'
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:28.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
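While waitforlisten polls the new target's /var/tmp/spdk.sock, the trace that follows registers the DHCHAP secrets as named keyring entries instead of passing them inline: keyring_file_add_key binds a name (key0..key3, plus ckey0..ckey2 for the controller-side secrets) to a file such as /tmp/spdk.key-null.0Zs. A hedged sketch of that loop, with keys/ckeys standing in for the secret files target/auth.sh generated earlier in the run:

# Sketch: register host and optional controller DHCHAP secrets as keyring entries.
# keys[i]/ckeys[i] hold paths to DHHC-1 secret files; an empty ckeys entry is skipped
# (key3 has no controller secret in this run, hence the [[ -n '' ]] check in the trace).
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done

These are the names that nvmf_subsystem_add_host and bdev_nvme_attach_controller reference below through --dhchap-key and --dhchap-ctrlr-key.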
00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.163 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 null0 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Zs 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.XBl ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBl 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Dcd 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.uri ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uri 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:28.422 17:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9T2 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7FI ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7FI 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.obS 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
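That sequence is connect_authenticate's positive path: allow the host on cnode0 with a given key, attach a controller through the host application, then read the qpair back from the target and assert that the negotiated digest, DH group, and authentication state match what was configured. Condensed into plain commands as the trace shows them (with $hostnqn standing in for the uuid-based host NQN used throughout this run):

# Sketch of the positive check for digest=sha512, dhgroup=ffdhe8192, key3.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The negative cases that follow reuse the same attach but first restrict the host's bdev_nvme_set_options to a single digest or DH group, proving that a digest or dhgroup mismatch fails authentication with the same Code=-5 error.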
00:14:28.422 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.357 nvme0n1 00:14:29.357 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.357 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.357 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.616 { 00:14:29.616 "auth": { 00:14:29.616 "dhgroup": "ffdhe8192", 00:14:29.616 "digest": "sha512", 00:14:29.616 "state": "completed" 00:14:29.616 }, 00:14:29.616 "cntlid": 1, 00:14:29.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:29.616 "listen_address": { 00:14:29.616 "adrfam": "IPv4", 00:14:29.616 "traddr": "10.0.0.3", 00:14:29.616 "trsvcid": "4420", 00:14:29.616 "trtype": "TCP" 00:14:29.616 }, 00:14:29.616 "peer_address": { 00:14:29.616 "adrfam": "IPv4", 00:14:29.616 "traddr": "10.0.0.1", 00:14:29.616 "trsvcid": "49132", 00:14:29.616 "trtype": "TCP" 00:14:29.616 }, 00:14:29.616 "qid": 0, 00:14:29.616 "state": "enabled", 00:14:29.616 "thread": "nvmf_tgt_poll_group_000" 00:14:29.616 } 00:14:29.616 ]' 00:14:29.616 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.875 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.133 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:30.133 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key3 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:30.700 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.958 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.959 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.218 2024/11/19 17:13:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:31.218 request: 00:14:31.218 { 00:14:31.218 "method": "bdev_nvme_attach_controller", 00:14:31.218 "params": { 00:14:31.218 "name": "nvme0", 00:14:31.218 "trtype": "tcp", 00:14:31.218 "traddr": "10.0.0.3", 00:14:31.218 "adrfam": "ipv4", 00:14:31.218 "trsvcid": "4420", 00:14:31.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:31.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:31.218 "prchk_reftag": false, 00:14:31.218 "prchk_guard": false, 00:14:31.218 "hdgst": false, 00:14:31.218 "ddgst": false, 00:14:31.218 "dhchap_key": "key3", 00:14:31.218 "allow_unrecognized_csi": false 00:14:31.218 } 00:14:31.218 } 00:14:31.218 Got JSON-RPC error response 00:14:31.218 GoRPCClient: error on JSON-RPC call 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.218 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:31.476 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.477 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.735 2024/11/19 17:13:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:31.735 request: 00:14:31.735 { 00:14:31.735 "method": "bdev_nvme_attach_controller", 00:14:31.735 "params": { 00:14:31.735 "name": "nvme0", 00:14:31.735 "trtype": "tcp", 00:14:31.735 "traddr": "10.0.0.3", 00:14:31.735 "adrfam": "ipv4", 00:14:31.735 "trsvcid": "4420", 00:14:31.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:31.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:31.735 "prchk_reftag": false, 00:14:31.735 "prchk_guard": false, 00:14:31.735 "hdgst": false, 00:14:31.735 "ddgst": false, 00:14:31.735 "dhchap_key": "key3", 00:14:31.735 "allow_unrecognized_csi": false 00:14:31.735 } 00:14:31.735 } 00:14:31.735 Got JSON-RPC error response 00:14:31.735 GoRPCClient: error on JSON-RPC call 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:31.735 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:31.993 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:31.993 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:31.993 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:31.993 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.251 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.818 2024/11/19 17:13:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:32.818 request: 00:14:32.818 { 00:14:32.818 "method": "bdev_nvme_attach_controller", 00:14:32.818 "params": { 00:14:32.818 "name": "nvme0", 00:14:32.818 "trtype": "tcp", 00:14:32.818 "traddr": "10.0.0.3", 00:14:32.818 "adrfam": "ipv4", 00:14:32.818 "trsvcid": "4420", 00:14:32.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:32.818 "prchk_reftag": false, 00:14:32.818 "prchk_guard": false, 00:14:32.818 "hdgst": false, 00:14:32.818 "ddgst": false, 00:14:32.818 "dhchap_key": "key0", 00:14:32.818 "dhchap_ctrlr_key": "key1", 00:14:32.818 "allow_unrecognized_csi": false 00:14:32.818 } 00:14:32.818 } 00:14:32.818 Got JSON-RPC error response 00:14:32.818 GoRPCClient: error on JSON-RPC call 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:32.818 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:33.077 nvme0n1 00:14:33.077 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:33.077 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:33.077 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.335 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.335 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.335 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:33.594 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:34.529 nvme0n1 00:14:34.529 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:34.529 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:34.529 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.529 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:34.788 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.788 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:34.788 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid 838cb2d2-b846-4910-b58d-4ab4444a5e07 -l 0 --dhchap-secret DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: --dhchap-ctrl-secret DHHC-1:03:NGJiNWQ2Yjg4YTFhYTdhNDZhN2ViZDU0YjQ3M2YzMjdmMTE0MjZmMzZjOWJjYTQ0MjU5NGQwZmE0OTVmZTU5NsPKk0w=: 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
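With the target rekeyed to key2/key3, the host reconnects through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...), and the nvme_get_ctrlr call traced next resolves which fabrics controller instance that connection produced by scanning sysfs. A condensed sketch of that lookup, assuming the per-controller subsysnqn attribute supplies the NQN being matched (the authoritative helper lives in target/auth.sh):

    # walk the fabrics controllers and emit the one bound to our subsystem NQN
    nvme_get_ctrlr() {
        local dev
        for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
            # subsysnqn is assumed to hold the subsystem NQN of this controller
            if [[ $(cat "$dev/subsysnqn") == "nqn.2024-03.io.spdk:cnode0" ]]; then
                echo "${dev##*/}"    # e.g. nvme0
                break
            fi
        done
    }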
00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.727 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:35.727 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:36.293 2024/11/19 17:13:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:36.293 request: 00:14:36.293 { 00:14:36.293 "method": "bdev_nvme_attach_controller", 00:14:36.293 "params": { 00:14:36.293 "name": "nvme0", 00:14:36.293 "trtype": "tcp", 00:14:36.293 "traddr": "10.0.0.3", 00:14:36.293 "adrfam": "ipv4", 
00:14:36.293 "trsvcid": "4420", 00:14:36.293 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:36.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07", 00:14:36.293 "prchk_reftag": false, 00:14:36.293 "prchk_guard": false, 00:14:36.293 "hdgst": false, 00:14:36.293 "ddgst": false, 00:14:36.293 "dhchap_key": "key1", 00:14:36.293 "allow_unrecognized_csi": false 00:14:36.293 } 00:14:36.293 } 00:14:36.293 Got JSON-RPC error response 00:14:36.293 GoRPCClient: error on JSON-RPC call 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:36.293 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:37.227 nvme0n1 00:14:37.227 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:37.227 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:37.227 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.486 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.486 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.486 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:38.052 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:38.310 nvme0n1 00:14:38.310 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:38.310 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.310 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:38.569 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.569 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.569 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: '' 2s 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: ]] 00:14:38.827 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTU1MDdiN2U0YzIxNjM3YWNhNjQ3Yjc0NzI0MzYxNTVkj6wl: 00:14:38.828 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:38.828 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:38.828 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: 2s 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: ]] 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGI4OGNhMzNjY2MyYzliZGFjMGQ4YTgwM2VjZjg3M2RlN2FlN2M1MzI0NGZhZWQ4zQfizQ==: 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:41.358 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:43.257 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:43.258 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:44.191 nvme0n1 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:44.191 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:44.755 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:44.755 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:44.755 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:45.013 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:45.270 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:45.270 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:45.270 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:45.528 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:14:46.161 2024/11/19 17:13:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:14:46.161 request: 00:14:46.161 { 00:14:46.161 "method": "bdev_nvme_set_keys", 00:14:46.161 "params": { 00:14:46.161 "name": "nvme0", 00:14:46.161 "dhchap_key": "key1", 00:14:46.161 "dhchap_ctrlr_key": "key3" 00:14:46.161 } 00:14:46.161 } 00:14:46.161 Got JSON-RPC error response 00:14:46.161 GoRPCClient: error on JSON-RPC call 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.161 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:46.418 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:46.418 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:47.791 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:47.791 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:47.791 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:47.791 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:48.724 nvme0n1 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:48.724 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:49.657 2024/11/19 17:13:41 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:14:49.657 request: 00:14:49.657 { 00:14:49.657 "method": "bdev_nvme_set_keys", 00:14:49.657 "params": { 00:14:49.657 "name": "nvme0", 00:14:49.657 "dhchap_key": "key2", 00:14:49.657 "dhchap_ctrlr_key": "key0" 00:14:49.657 } 00:14:49.657 } 00:14:49.657 Got JSON-RPC error response 00:14:49.657 GoRPCClient: error on JSON-RPC call 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 
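The rekey attempt above deliberately crosses the keys: the subsystem was just moved to key2/key3, while the host offers key2/key0, so bdev_nvme_set_keys is rejected with Code=-13 Permission denied. Because the controller was attached with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1, the failed reauthentication ends with the controller being dropped, and the get_controllers/jq/sleep trace that follows is the wait for that to happen. A minimal sketch of that loop:

    # poll until the host bdev layer has torn the controller down
    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock)
    while (( $("${rpc[@]}" bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done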
00:14:49.657 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.657 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:49.657 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76653 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76653 ']' 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76653 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76653 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:51.030 killing process with pid 76653 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76653' 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76653 00:14:51.030 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76653 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.596 rmmod nvme_tcp 00:14:51.596 rmmod nvme_fabrics 00:14:51.596 rmmod nvme_keyring 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81536 ']' 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81536 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81536 ']' 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81536 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81536 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.596 killing process with pid 81536 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81536' 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81536 00:14:51.596 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81536 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.855 17:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.855 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.0Zs /tmp/spdk.key-sha256.Dcd /tmp/spdk.key-sha384.9T2 /tmp/spdk.key-sha512.obS /tmp/spdk.key-sha512.XBl /tmp/spdk.key-sha384.uri /tmp/spdk.key-sha256.7FI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:52.113 00:14:52.113 real 3m13.388s 00:14:52.113 user 7m50.193s 00:14:52.113 sys 0m24.881s 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.113 ************************************ 00:14:52.113 END TEST nvmf_auth_target 00:14:52.113 ************************************ 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.113 ************************************ 00:14:52.113 START TEST nvmf_bdevio_no_huge 00:14:52.113 ************************************ 00:14:52.113 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:52.373 * Looking for test storage... 
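Before the next test starts, note the teardown order nvmftestfini followed in the trace above: sync, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the target process, strip the SPDK_NVMF-tagged iptables rules, dismantle the veth/bridge/namespace topology, and finally remove the temporary DH-CHAP key files. A compressed sketch of the link teardown (nvmf_veth_fini), with every interface name taken from the trace:

    # detach and down the bridge legs, then delete the bridge, the host-side
    # links, and the links inside the target namespace
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2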
00:14:52.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.373 --rc genhtml_branch_coverage=1 00:14:52.373 --rc genhtml_function_coverage=1 00:14:52.373 --rc genhtml_legend=1 00:14:52.373 --rc geninfo_all_blocks=1 00:14:52.373 --rc geninfo_unexecuted_blocks=1 00:14:52.373 00:14:52.373 ' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.373 --rc genhtml_branch_coverage=1 00:14:52.373 --rc genhtml_function_coverage=1 00:14:52.373 --rc genhtml_legend=1 00:14:52.373 --rc geninfo_all_blocks=1 00:14:52.373 --rc geninfo_unexecuted_blocks=1 00:14:52.373 00:14:52.373 ' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.373 --rc genhtml_branch_coverage=1 00:14:52.373 --rc genhtml_function_coverage=1 00:14:52.373 --rc genhtml_legend=1 00:14:52.373 --rc geninfo_all_blocks=1 00:14:52.373 --rc geninfo_unexecuted_blocks=1 00:14:52.373 00:14:52.373 ' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.373 --rc genhtml_branch_coverage=1 00:14:52.373 --rc genhtml_function_coverage=1 00:14:52.373 --rc genhtml_legend=1 00:14:52.373 --rc geninfo_all_blocks=1 00:14:52.373 --rc geninfo_unexecuted_blocks=1 00:14:52.373 00:14:52.373 ' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.373 
17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:52.373 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
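The "integer expression expected" message above is the shell objecting to '[' '' -eq 1 ']': the numeric -eq test requires integers on both sides, and the tested variable expanded to an empty string. (The ever-growing repeated directories in the PATH traces are similarly a faithful record of paths/export.sh prepending the same entries each time it is sourced.) Two defensive forms that tolerate unset or empty flags, as an illustrative sketch:

    flag=""
    # [ "$flag" -eq 1 ]                       # would print: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi    # default empty/unset to 0
    if [[ "$flag" == 1 ]]; then echo enabled; fi        # string compare, safe for any value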
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.374 
17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.374 Cannot find device "nvmf_init_br" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.374 Cannot find device "nvmf_init_br2" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.374 Cannot find device "nvmf_tgt_br" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.374 Cannot find device "nvmf_tgt_br2" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.374 Cannot find device "nvmf_init_br" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.374 Cannot find device "nvmf_init_br2" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.374 Cannot find device "nvmf_tgt_br" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.374 Cannot find device "nvmf_tgt_br2" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.374 Cannot find device "nvmf_br" 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:52.374 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.633 Cannot find device "nvmf_init_if" 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.633 Cannot find device "nvmf_init_if2" 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:52.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.633 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.633 17:13:45 
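The sequence above first tears down any leftovers (each ip command is paired with true, so a missing device on a clean host is non-fatal, hence the "Cannot find device" noise) and then builds the test topology: four veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, initiator addresses 10.0.0.1/2 outside it and target addresses 10.0.0.3/4 inside it. Condensed to a single initiator/target pair, the shape is roughly this (a sketch, not the verbatim common.sh):

    ip link set nvmf_init_br nomaster 2>/dev/null || true    # tolerant pre-cleanup
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                          # the *_br peers join this next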
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:52.633 00:14:52.633 --- 10.0.0.3 ping statistics --- 00:14:52.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.633 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:14:52.633 00:14:52.633 --- 10.0.0.4 ping statistics --- 00:14:52.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.633 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:52.633 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:52.892 00:14:52.892 --- 10.0.0.1 ping statistics --- 00:14:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.892 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
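Each iptables rule above is installed through an ipts wrapper that tags the rule with a comment, so teardown can later strip exactly these rules and nothing else; the pings then verify both directions across the bridge. Judging from the expanded commands in the trace, the wrapper has roughly this shape (sketch):

    # Tag every inserted rule with its own argument string under an SPDK_NVMF: marker.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT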
00:14:52.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:52.892 00:14:52.892 --- 10.0.0.2 ping statistics --- 00:14:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.892 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82399 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82399 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82399 ']' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.892 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:52.892 [2024-11-19 17:13:45.267800] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
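At this point nvmfappstart has launched nvmf_tgt inside the namespace (note --no-huge -s 1024 and -m 0x78, matching the suite's name) and waitforlisten blocks until the RPC socket answers, which produces the "Waiting for process..." line above. The waiting loop is roughly this shape; a sketch only, since the real helper in autotest_common.sh is more thorough and rpc.py's path is abbreviated here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                    # app died early
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                                      # retries exhausted
    }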
00:14:52.892 [2024-11-19 17:13:45.267901] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:53.150 [2024-11-19 17:13:45.436159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.150 [2024-11-19 17:13:45.516912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.150 [2024-11-19 17:13:45.516982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.150 [2024-11-19 17:13:45.517007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.150 [2024-11-19 17:13:45.517018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.150 [2024-11-19 17:13:45.517028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.150 [2024-11-19 17:13:45.517861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:53.150 [2024-11-19 17:13:45.518004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:53.150 [2024-11-19 17:13:45.518006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.150 [2024-11-19 17:13:45.517925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 [2024-11-19 17:13:46.401705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 Malloc0 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 [2024-11-19 17:13:46.450027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:54.086 { 00:14:54.086 "params": { 00:14:54.086 "name": "Nvme$subsystem", 00:14:54.086 "trtype": "$TEST_TRANSPORT", 00:14:54.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.086 "adrfam": "ipv4", 00:14:54.086 "trsvcid": "$NVMF_PORT", 00:14:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.086 "hdgst": ${hdgst:-false}, 00:14:54.086 "ddgst": ${ddgst:-false} 00:14:54.086 }, 00:14:54.086 "method": "bdev_nvme_attach_controller" 00:14:54.086 } 00:14:54.086 EOF 00:14:54.086 )") 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
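For reference, the target-side setup traced above reduces to five rpc.py calls, and bdevio then consumes the generated config through process substitution, which is why the trace shows --json /dev/fd/62. A condensed sketch (paths abbreviated; the real gen_nvmf_target_json assembles the same "subsystems" wrapper from the heredoc shown above):

    # Target side, as individual RPCs against the in-namespace socket:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Initiator side: hand bdevio the generated JSON on an anonymous fd.
    gen_json() {
        printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{
            "method":"bdev_nvme_attach_controller",
            "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.3",
                      "adrfam":"ipv4","trsvcid":"4420",
                      "subnqn":"nqn.2016-06.io.spdk:cnode1",
                      "hostnqn":"nqn.2016-06.io.spdk:host1"}}]}]}'
    }
    ./bdevio --json <(gen_json) --no-huge -s 1024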
00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:54.086 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:54.086 "params": { 00:14:54.086 "name": "Nvme1", 00:14:54.086 "trtype": "tcp", 00:14:54.086 "traddr": "10.0.0.3", 00:14:54.086 "adrfam": "ipv4", 00:14:54.086 "trsvcid": "4420", 00:14:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.086 "hdgst": false, 00:14:54.086 "ddgst": false 00:14:54.086 }, 00:14:54.086 "method": "bdev_nvme_attach_controller" 00:14:54.086 }' 00:14:54.086 [2024-11-19 17:13:46.516041] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:14:54.087 [2024-11-19 17:13:46.516135] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82453 ] 00:14:54.345 [2024-11-19 17:13:46.678832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:54.345 [2024-11-19 17:13:46.774876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.345 [2024-11-19 17:13:46.774993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.345 [2024-11-19 17:13:46.775008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.603 I/O targets: 00:14:54.603 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:54.603 00:14:54.603 00:14:54.603 CUnit - A unit testing framework for C - Version 2.1-3 00:14:54.603 http://cunit.sourceforge.net/ 00:14:54.603 00:14:54.603 00:14:54.603 Suite: bdevio tests on: Nvme1n1 00:14:54.603 Test: blockdev write read block ...passed 00:14:54.603 Test: blockdev write zeroes read block ...passed 00:14:54.603 Test: blockdev write zeroes read no split ...passed 00:14:54.861 Test: blockdev write zeroes read split ...passed 00:14:54.861 Test: blockdev write zeroes read split partial ...passed 00:14:54.861 Test: blockdev reset ...[2024-11-19 17:13:47.157736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:54.861 [2024-11-19 17:13:47.157911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2a380 (9): Bad file descriptor 00:14:54.861 [2024-11-19 17:13:47.176463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:54.861 passed 00:14:54.861 Test: blockdev write read 8 blocks ...passed 00:14:54.861 Test: blockdev write read size > 128k ...passed 00:14:54.861 Test: blockdev write read invalid size ...passed 00:14:54.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:54.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:54.861 Test: blockdev write read max offset ...passed 00:14:54.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:54.861 Test: blockdev writev readv 8 blocks ...passed 00:14:54.861 Test: blockdev writev readv 30 x 1block ...passed 00:14:54.861 Test: blockdev writev readv block ...passed 00:14:54.861 Test: blockdev writev readv size > 128k ...passed 00:14:54.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:54.861 Test: blockdev comparev and writev ...[2024-11-19 17:13:47.355673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.355731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.355815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.355829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.356312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.356341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.356360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.356370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.356757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.356844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.356864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.356875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.357428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:54.861 [2024-11-19 17:13:47.357487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:54.861 [2024-11-19 17:13:47.357497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:55.119 passed 00:14:55.119 Test: blockdev nvme passthru rw ...passed 00:14:55.119 Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:13:47.441157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.119 [2024-11-19 17:13:47.441184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:55.119 [2024-11-19 17:13:47.441616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.119 [2024-11-19 17:13:47.441645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:55.119 [2024-11-19 17:13:47.441903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.119 [2024-11-19 17:13:47.442016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:55.119 [2024-11-19 17:13:47.442309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.119 [2024-11-19 17:13:47.442337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:55.119 passed 00:14:55.119 Test: blockdev nvme admin passthru ...passed 00:14:55.119 Test: blockdev copy ...passed 00:14:55.119 00:14:55.120 Run Summary: Type Total Ran Passed Failed Inactive 00:14:55.120 suites 1 1 n/a 0 0 00:14:55.120 tests 23 23 23 0 0 00:14:55.120 asserts 152 152 152 0 n/a 00:14:55.120 00:14:55.120 Elapsed time = 0.948 seconds 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.687 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.687 rmmod nvme_tcp 00:14:55.687 rmmod nvme_fabrics 00:14:55.687 rmmod nvme_keyring 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82399 ']' 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82399 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82399 ']' 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82399 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82399 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:55.687 killing process with pid 82399 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82399' 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82399 00:14:55.687 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82399 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:55.946 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:56.204 17:13:48 
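The iptr call traced above is the counterpart of the ipts tagging seen during setup: because every rule the test added carries the SPDK_NVMF comment, teardown can filter the saved ruleset and restore it, removing only those rules (sketch of the pairing):

    # Drop every rule carrying the SPDK_NVMF tag; leave the rest of the firewall alone.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }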
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.204 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:56.204 00:14:56.205 real 0m4.144s 00:14:56.205 user 0m14.283s 00:14:56.205 sys 0m1.572s 00:14:56.205 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.205 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.205 ************************************ 00:14:56.205 END TEST nvmf_bdevio_no_huge 00:14:56.205 ************************************ 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.504 ************************************ 00:14:56.504 START TEST nvmf_tls 00:14:56.504 ************************************ 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:56.504 * Looking for test storage... 
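The starred START/END banners and the real/user/sys summary above come from the run_test wrapper, which times each suite and launches the next one (nvmf_tls here) the same way bdevio was launched. Roughly, as a simplified sketch of the autotest_common.sh helper:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_tls ./test/nvmf/target/tls.sh --transport=tcp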
00:14:56.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.504 --rc genhtml_branch_coverage=1 00:14:56.504 --rc genhtml_function_coverage=1 00:14:56.504 --rc genhtml_legend=1 00:14:56.504 --rc geninfo_all_blocks=1 00:14:56.504 --rc geninfo_unexecuted_blocks=1 00:14:56.504 00:14:56.504 ' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.504 --rc genhtml_branch_coverage=1 00:14:56.504 --rc genhtml_function_coverage=1 00:14:56.504 --rc genhtml_legend=1 00:14:56.504 --rc geninfo_all_blocks=1 00:14:56.504 --rc geninfo_unexecuted_blocks=1 00:14:56.504 00:14:56.504 ' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.504 --rc genhtml_branch_coverage=1 00:14:56.504 --rc genhtml_function_coverage=1 00:14:56.504 --rc genhtml_legend=1 00:14:56.504 --rc geninfo_all_blocks=1 00:14:56.504 --rc geninfo_unexecuted_blocks=1 00:14:56.504 00:14:56.504 ' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.504 --rc genhtml_branch_coverage=1 00:14:56.504 --rc genhtml_function_coverage=1 00:14:56.504 --rc genhtml_legend=1 00:14:56.504 --rc geninfo_all_blocks=1 00:14:56.504 --rc geninfo_unexecuted_blocks=1 00:14:56.504 00:14:56.504 ' 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:56.504 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.504 17:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.505 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.505 
17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.505 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.505 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:56.798 Cannot find device "nvmf_init_br" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:56.798 Cannot find device "nvmf_init_br2" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:56.798 Cannot find device "nvmf_tgt_br" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.798 Cannot find device "nvmf_tgt_br2" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:56.798 Cannot find device "nvmf_init_br" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:56.798 Cannot find device "nvmf_init_br2" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:56.798 Cannot find device "nvmf_tgt_br" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:56.798 Cannot find device "nvmf_tgt_br2" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:56.798 Cannot find device "nvmf_br" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:56.798 Cannot find device "nvmf_init_if" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:56.798 Cannot find device "nvmf_init_if2" 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:56.798 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:56.799 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.799 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:57.057 17:13:49 
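The nvmf_veth_init sequence above builds the fixture's topology: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the bridge-side ends enslaved to nvmf_br so 10.0.0.1/10.0.0.2 (host) can reach 10.0.0.3/10.0.0.4 (namespace). A standalone sketch covering the first pair of each kind (the second pair, nvmf_init_if2/nvmf_tgt_if2, is built identically), including the ipts wrapper whose expansion is visible in the iptables records nearby:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side goes into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # wrapper inferred from the expanded rules
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" lines earlier are expected: teardown runs unconditionally before setup (note the '-- # true' after each failing command), so a clean machine has nothing to delete yet.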
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:57.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:14:57.057 00:14:57.057 --- 10.0.0.3 ping statistics --- 00:14:57.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.057 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:57.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:57.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:57.057 00:14:57.057 --- 10.0.0.4 ping statistics --- 00:14:57.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.057 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:57.057 00:14:57.057 --- 10.0.0.1 ping statistics --- 00:14:57.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.057 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:57.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:57.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:57.057 00:14:57.057 --- 10.0.0.2 ping statistics --- 00:14:57.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.057 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82696 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82696 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82696 ']' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.057 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.058 [2024-11-19 17:13:49.522459] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
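With connectivity verified by the four pings, the target application is started inside the namespace: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD, so nvmf_tgt runs under 'ip netns exec' and listens on the namespace-side addresses. The --wait-for-rpc flag holds subsystem initialization until an explicit framework_start_init (issued later, at tls.sh@132), which is what lets the test set ssl socket options before any listener exists. A simplified sketch of the launch plus the waitforlisten step (the polling loop is an approximation; rpc.py defaults to /var/tmp/spdk.sock):

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # retry until the app answers on its RPC socket
done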
00:14:57.058 [2024-11-19 17:13:49.522757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.316 [2024-11-19 17:13:49.683702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.316 [2024-11-19 17:13:49.744077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.316 [2024-11-19 17:13:49.744133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.316 [2024-11-19 17:13:49.744156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.316 [2024-11-19 17:13:49.744166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.316 [2024-11-19 17:13:49.744176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.316 [2024-11-19 17:13:49.744624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:58.249 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:58.507 true 00:14:58.507 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:58.507 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:58.765 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:58.765 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:58.765 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:59.023 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:59.023 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:59.281 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:59.281 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:59.281 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:59.539 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:59.539 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:59.798 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:59.798 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:59.798 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:59.798 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:00.056 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:00.056 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:00.056 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:00.315 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.315 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:00.572 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:00.572 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:00.573 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:00.830 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.830 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:01.089 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
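tls.sh@71 through @114 exercise the ssl socket implementation's options with a set/read-back pattern: write an option via sock_impl_set_options, read it back via sock_impl_get_options, extract the field with jq, and assert on it. The test also round-trips an arbitrary version value (7) to confirm the setter stores exactly what it is given. The pattern, condensed (rpc.py path abbreviated):

./scripts/rpc.py sock_set_default_impl -i ssl
./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
[[ "$(./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version)" == 13 ]]
./scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
[[ "$(./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls)" == true ]]
./scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls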
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.8jQW0zzZR3 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.7CdtKKIpaD 00:15:01.347 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:01.348 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:01.348 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.8jQW0zzZR3 00:15:01.348 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.7CdtKKIpaD 00:15:01.348 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:01.606 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:01.865 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.8jQW0zzZR3 00:15:01.865 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8jQW0zzZR3 00:15:01.865 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.123 [2024-11-19 17:13:54.552168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.123 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:02.382 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:02.641 [2024-11-19 17:13:55.124336] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.641 [2024-11-19 17:13:55.124606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.641 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:02.899 malloc0 00:15:02.899 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:03.158 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8jQW0zzZR3 00:15:03.416 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:03.675 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8jQW0zzZR3 00:15:15.957 Initializing NVMe Controllers 00:15:15.957 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.957 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:15.957 Initialization complete. Launching workers. 00:15:15.957 ======================================================== 00:15:15.957 Latency(us) 00:15:15.957 Device Information : IOPS MiB/s Average min max 00:15:15.957 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9506.19 37.13 6734.15 1110.69 9871.29 00:15:15.957 ======================================================== 00:15:15.957 Total : 9506.19 37.13 6734.15 1110.69 9871.29 00:15:15.957 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jQW0zzZR3 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8jQW0zzZR3 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83069 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83069 /var/tmp/bdevperf.sock 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83069 ']' 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.957 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.957 [2024-11-19 17:14:06.445485] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
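Before the perf run above, the two interchange-format PSKs were produced by format_interchange_psk (tls.sh@119/@120). The printed keys decode as base64 of the ASCII secret with a 4-byte CRC32 appended, wrapped in the NVMeTLSkey-1:<hash>:...: envelope, where 01 denotes HMAC-SHA-256. A sketch of that derivation (my reconstruction of the helper's effect; the little-endian CRC byte order is an assumption consistent with the keys shown above):

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"     # the configured secret, kept as ASCII
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC32 appended little-endian
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
EOF
# should print the first key above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The files holding the keys are chmod 0600 before being registered with keyring_file_add_key.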
00:15:15.957 [2024-11-19 17:14:06.446594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83069 ] 00:15:15.957 [2024-11-19 17:14:06.601824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.957 [2024-11-19 17:14:06.658153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.957 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.957 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:15.957 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8jQW0zzZR3 00:15:15.957 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:15.957 [2024-11-19 17:14:07.884937] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:15.957 TLSTESTn1 00:15:15.957 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:15.957 Running I/O for 10 seconds... 00:15:17.846 3712.00 IOPS, 14.50 MiB/s [2024-11-19T17:14:11.316Z] 3675.50 IOPS, 14.36 MiB/s [2024-11-19T17:14:12.251Z] 3840.00 IOPS, 15.00 MiB/s [2024-11-19T17:14:13.187Z] 3951.25 IOPS, 15.43 MiB/s [2024-11-19T17:14:14.123Z] 4029.80 IOPS, 15.74 MiB/s [2024-11-19T17:14:15.499Z] 4076.83 IOPS, 15.93 MiB/s [2024-11-19T17:14:16.108Z] 4119.29 IOPS, 16.09 MiB/s [2024-11-19T17:14:17.485Z] 4149.62 IOPS, 16.21 MiB/s [2024-11-19T17:14:18.421Z] 4174.44 IOPS, 16.31 MiB/s [2024-11-19T17:14:18.421Z] 4202.20 IOPS, 16.41 MiB/s 00:15:25.888 Latency(us) 00:15:25.888 [2024-11-19T17:14:18.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.888 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.888 Verification LBA range: start 0x0 length 0x2000 00:15:25.888 TLSTESTn1 : 10.01 4208.72 16.44 0.00 0.00 30362.26 4825.83 31218.97 00:15:25.888 [2024-11-19T17:14:18.421Z] =================================================================================================================== 00:15:25.888 [2024-11-19T17:14:18.421Z] Total : 4208.72 16.44 0.00 0.00 30362.26 4825.83 31218.97 00:15:25.888 { 00:15:25.888 "results": [ 00:15:25.888 { 00:15:25.888 "job": "TLSTESTn1", 00:15:25.888 "core_mask": "0x4", 00:15:25.888 "workload": "verify", 00:15:25.888 "status": "finished", 00:15:25.888 "verify_range": { 00:15:25.888 "start": 0, 00:15:25.888 "length": 8192 00:15:25.888 }, 00:15:25.888 "queue_depth": 128, 00:15:25.888 "io_size": 4096, 00:15:25.888 "runtime": 10.014912, 00:15:25.888 "iops": 4208.723950844501, 00:15:25.888 "mibps": 16.44032793298633, 00:15:25.888 "io_failed": 0, 00:15:25.888 "io_timeout": 0, 00:15:25.888 "avg_latency_us": 30362.25952093174, 00:15:25.888 "min_latency_us": 4825.832727272727, 00:15:25.888 "max_latency_us": 31218.967272727274 00:15:25.888 } 00:15:25.888 ], 00:15:25.888 "core_count": 1 00:15:25.888 } 00:15:25.888 17:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83069 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83069 ']' 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83069 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83069 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83069' 00:15:25.888 killing process with pid 83069 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83069 00:15:25.888 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.888 00:15:25.888 Latency(us) 00:15:25.888 [2024-11-19T17:14:18.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.888 [2024-11-19T17:14:18.421Z] =================================================================================================================== 00:15:25.888 [2024-11-19T17:14:18.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83069 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CdtKKIpaD 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CdtKKIpaD 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:25.888 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CdtKKIpaD 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7CdtKKIpaD 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83227 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83227 /var/tmp/bdevperf.sock 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83227 ']' 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.889 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.889 [2024-11-19 17:14:18.406344] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:25.889 [2024-11-19 17:14:18.406453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83227 ] 00:15:26.147 [2024-11-19 17:14:18.552370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.147 [2024-11-19 17:14:18.594524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.406 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.406 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:26.406 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7CdtKKIpaD 00:15:26.665 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:26.923 [2024-11-19 17:14:19.217550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.923 [2024-11-19 17:14:19.222836] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:26.923 [2024-11-19 17:14:19.223446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64fac0 (107): Transport endpoint is not connected 00:15:26.923 [2024-11-19 17:14:19.224432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64fac0 (9): Bad file descriptor 00:15:26.923 [2024-11-19 
17:14:19.225427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:26.923 [2024-11-19 17:14:19.225466] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:26.923 [2024-11-19 17:14:19.225492] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:26.923 [2024-11-19 17:14:19.225507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:15:26.923 2024/11/19 17:14:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.923 request: 00:15:26.923 { 00:15:26.923 "method": "bdev_nvme_attach_controller", 00:15:26.923 "params": { 00:15:26.923 "name": "TLSTEST", 00:15:26.923 "trtype": "tcp", 00:15:26.923 "traddr": "10.0.0.3", 00:15:26.923 "adrfam": "ipv4", 00:15:26.923 "trsvcid": "4420", 00:15:26.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.923 "prchk_reftag": false, 00:15:26.923 "prchk_guard": false, 00:15:26.923 "hdgst": false, 00:15:26.923 "ddgst": false, 00:15:26.923 "psk": "key0", 00:15:26.923 "allow_unrecognized_csi": false 00:15:26.923 } 00:15:26.923 } 00:15:26.923 Got JSON-RPC error response 00:15:26.923 GoRPCClient: error on JSON-RPC call 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83227 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83227 ']' 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83227 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83227 00:15:26.923 killing process with pid 83227 00:15:26.923 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.923 00:15:26.923 Latency(us) 00:15:26.923 [2024-11-19T17:14:19.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.923 [2024-11-19T17:14:19.456Z] =================================================================================================================== 00:15:26.923 [2024-11-19T17:14:19.456Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83227' 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83227 00:15:26.923 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83227 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jQW0zzZR3 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jQW0zzZR3 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jQW0zzZR3 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8jQW0zzZR3 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83272 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83272 /var/tmp/bdevperf.sock 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83272 ']' 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
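The negative cases are wrapped in NOT, whose internals appear in the trace (local es=0 ... es=1 ... (( !es == 0 ))): it runs the wrapped command, captures the exit status, and succeeds only when that status is nonzero, so an attach that correctly fails makes the test pass. A simplified sketch of the helper (the real autotest_common.sh version also validates the argument via type -t and special-cases statuses above 128, as the trace shows):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    (( es != 0 ))    # succeed only if the command failed
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CdtKKIpaD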
00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.182 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.182 [2024-11-19 17:14:19.533766] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:27.182 [2024-11-19 17:14:19.534528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83272 ] 00:15:27.182 [2024-11-19 17:14:19.681009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.441 [2024-11-19 17:14:19.727269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.441 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.441 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:27.441 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8jQW0zzZR3 00:15:27.700 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:27.960 [2024-11-19 17:14:20.366975] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.960 [2024-11-19 17:14:20.372404] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:27.960 [2024-11-19 17:14:20.372458] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:27.960 [2024-11-19 17:14:20.372507] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:27.960 [2024-11-19 17:14:20.373048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891ac0 (107): Transport endpoint is not connected 00:15:27.960 [2024-11-19 17:14:20.374030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891ac0 (9): Bad file descriptor 00:15:27.960 [2024-11-19 17:14:20.375025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:27.960 [2024-11-19 17:14:20.375055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:27.960 [2024-11-19 17:14:20.375065] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:27.960 [2024-11-19 17:14:20.375081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
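This case differs from the wrong-key one only in the identity presented: host2 was never authorized, so the target has no PSK on file for it and the initiator sees the same "Transport endpoint is not connected" sequence. Which hosts a subsystem accepts, and under which key name, can be checked over the target's RPC socket, e.g. (a verification step, not part of the test itself):

./scripts/rpc.py nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .hosts'
# host1 was added with --psk key0; host2 is absent, so its handshake finds no PSK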
00:15:27.960 2024/11/19 17:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:27.960 request: 00:15:27.960 { 00:15:27.960 "method": "bdev_nvme_attach_controller", 00:15:27.960 "params": { 00:15:27.960 "name": "TLSTEST", 00:15:27.960 "trtype": "tcp", 00:15:27.960 "traddr": "10.0.0.3", 00:15:27.960 "adrfam": "ipv4", 00:15:27.960 "trsvcid": "4420", 00:15:27.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:27.960 "prchk_reftag": false, 00:15:27.960 "prchk_guard": false, 00:15:27.960 "hdgst": false, 00:15:27.960 "ddgst": false, 00:15:27.960 "psk": "key0", 00:15:27.960 "allow_unrecognized_csi": false 00:15:27.960 } 00:15:27.960 } 00:15:27.960 Got JSON-RPC error response 00:15:27.960 GoRPCClient: error on JSON-RPC call 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83272 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83272 ']' 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83272 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83272 00:15:27.960 killing process with pid 83272 00:15:27.960 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.960 00:15:27.960 Latency(us) 00:15:27.960 [2024-11-19T17:14:20.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.960 [2024-11-19T17:14:20.493Z] =================================================================================================================== 00:15:27.960 [2024-11-19T17:14:20.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83272' 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83272 00:15:27.960 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83272 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.220 17:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jQW0zzZR3 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jQW0zzZR3 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:28.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jQW0zzZR3 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8jQW0zzZR3 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83311 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83311 /var/tmp/bdevperf.sock 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83311 ']' 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.220 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.220 [2024-11-19 17:14:20.675714] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
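Every case in this section reuses the same bdevperf-as-a-service pattern: start bdevperf idle with -z on a private RPC socket, configure the controller under test over that socket, then (in the positive case) drive I/O with bdevperf.py perform_tests. Condensed for the wrong-subnqn case being set up here (waitforlisten on the socket omitted):

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8jQW0zzZR3
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail here
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests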
00:15:28.220 [2024-11-19 17:14:20.675820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83311 ] 00:15:28.479 [2024-11-19 17:14:20.818129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.479 [2024-11-19 17:14:20.874195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.479 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.479 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.479 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8jQW0zzZR3 00:15:28.737 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:29.306 [2024-11-19 17:14:21.532318] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.306 [2024-11-19 17:14:21.540521] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:29.306 [2024-11-19 17:14:21.540577] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:29.306 [2024-11-19 17:14:21.540625] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:29.306 [2024-11-19 17:14:21.541328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119aac0 (107): Transport endpoint is not connected 00:15:29.306 [2024-11-19 17:14:21.542316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119aac0 (9): Bad file descriptor 00:15:29.306 [2024-11-19 17:14:21.543312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:15:29.306 [2024-11-19 17:14:21.543355] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:29.306 [2024-11-19 17:14:21.543365] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:29.306 [2024-11-19 17:14:21.543382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
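The target-side error below names the exact PSK identity it looked up: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2. As I read that string against the NVMe/TCP TLS addendum (TP 8011): the literal NVMe, a 0, R for a retained (as opposed to generated) PSK, the two-digit hash identifier (01 = HMAC-SHA-256), then the host NQN and subsystem NQN separated by single spaces; since cnode2 was never created, no key can match. A sketch of the construction (my decomposition, worth verifying against the spec):

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # matches the identity in the error below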
00:15:29.306 2024/11/19 17:14:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:29.306 request: 00:15:29.306 { 00:15:29.306 "method": "bdev_nvme_attach_controller", 00:15:29.306 "params": { 00:15:29.306 "name": "TLSTEST", 00:15:29.306 "trtype": "tcp", 00:15:29.306 "traddr": "10.0.0.3", 00:15:29.306 "adrfam": "ipv4", 00:15:29.306 "trsvcid": "4420", 00:15:29.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:29.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.306 "prchk_reftag": false, 00:15:29.306 "prchk_guard": false, 00:15:29.306 "hdgst": false, 00:15:29.306 "ddgst": false, 00:15:29.306 "psk": "key0", 00:15:29.306 "allow_unrecognized_csi": false 00:15:29.306 } 00:15:29.306 } 00:15:29.306 Got JSON-RPC error response 00:15:29.306 GoRPCClient: error on JSON-RPC call 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83311 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83311 ']' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83311 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83311 00:15:29.306 killing process with pid 83311 00:15:29.306 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.306 00:15:29.306 Latency(us) 00:15:29.306 [2024-11-19T17:14:21.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.306 [2024-11-19T17:14:21.839Z] =================================================================================================================== 00:15:29.306 [2024-11-19T17:14:21.839Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83311' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83311 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83311 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.306 17:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.306 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83350 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83350 /var/tmp/bdevperf.sock 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83350 ']' 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.307 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.565 [2024-11-19 17:14:21.867926] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
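The target/tls.sh@156 case repeats the same scaffold with an empty PSK path. For reference, run_bdevperf's setup (tls.sh@25-31, traced above and again here) boils down to starting bdevperf daemonized and waiting for its RPC socket before issuing calls; a condensed sketch, where waitforlisten and cleanup come from the test harness:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

The -z flag keeps the application idle after startup, so the workload only runs when driven via bdevperf.py perform_tests, as the successful case further below does.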
00:15:29.565 [2024-11-19 17:14:21.868052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83350 ] 00:15:29.565 [2024-11-19 17:14:22.023215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.565 [2024-11-19 17:14:22.096523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.502 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.502 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.502 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:30.760 [2024-11-19 17:14:23.155746] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:30.760 [2024-11-19 17:14:23.155811] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:30.760 2024/11/19 17:14:23 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:15:30.760 request: 00:15:30.760 { 00:15:30.760 "method": "keyring_file_add_key", 00:15:30.760 "params": { 00:15:30.760 "name": "key0", 00:15:30.760 "path": "" 00:15:30.760 } 00:15:30.760 } 00:15:30.760 Got JSON-RPC error response 00:15:30.760 GoRPCClient: error on JSON-RPC call 00:15:30.760 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:31.019 [2024-11-19 17:14:23.455967] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.019 [2024-11-19 17:14:23.456044] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:31.019 2024/11/19 17:14:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:15:31.019 request: 00:15:31.019 { 00:15:31.019 "method": "bdev_nvme_attach_controller", 00:15:31.019 "params": { 00:15:31.019 "name": "TLSTEST", 00:15:31.019 "trtype": "tcp", 00:15:31.019 "traddr": "10.0.0.3", 00:15:31.019 "adrfam": "ipv4", 00:15:31.019 "trsvcid": "4420", 00:15:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.019 "prchk_reftag": false, 00:15:31.019 "prchk_guard": false, 00:15:31.019 "hdgst": false, 00:15:31.019 "ddgst": false, 00:15:31.019 "psk": "key0", 00:15:31.019 "allow_unrecognized_csi": false 00:15:31.019 } 00:15:31.019 } 00:15:31.019 Got JSON-RPC error response 00:15:31.019 GoRPCClient: error on JSON-RPC call 00:15:31.019 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83350 00:15:31.019 17:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83350 ']' 00:15:31.019 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83350 00:15:31.019 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:31.019 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.019 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83350 00:15:31.019 killing process with pid 83350 00:15:31.019 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.019 00:15:31.019 Latency(us) 00:15:31.019 [2024-11-19T17:14:23.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.019 [2024-11-19T17:14:23.552Z] =================================================================================================================== 00:15:31.020 [2024-11-19T17:14:23.553Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.020 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:31.020 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:31.020 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83350' 00:15:31.020 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83350 00:15:31.020 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83350 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82696 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82696 ']' 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82696 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82696 00:15:31.279 killing process with pid 82696 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82696' 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82696 00:15:31.279 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82696 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:15:31.538 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.wVPZy7rVSS 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.wVPZy7rVSS 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83417 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83417 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83417 ']' 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.538 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.797 [2024-11-19 17:14:24.076830] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
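format_interchange_psk above wraps raw key material in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64 of the key material plus its CRC-32>:, where the second argument 2 selects the 02 (SHA-384) hash identifier. The script computes the checksum with an inline Python snippet; a shell equivalent that reproduces key_long above, assuming the four trailing bytes are the little-endian CRC-32 of the key string (gzip's trailer carries exactly that checksum; the helper name here is ours):

    key=00112233445566778899aabbccddeeff0011223344556677
    # bytes -8..-5 of a gzip stream are the CRC-32 of its input, little-endian
    key_with_crc() { echo -n "$key"; echo -n "$key" | gzip -c | tail -c8 | head -c4; }
    echo "NVMeTLSkey-1:02:$(key_with_crc | base64 -w0):"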
00:15:31.797 [2024-11-19 17:14:24.076908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.797 [2024-11-19 17:14:24.215508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.797 [2024-11-19 17:14:24.258951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.797 [2024-11-19 17:14:24.259027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.797 [2024-11-19 17:14:24.259053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.797 [2024-11-19 17:14:24.259061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.797 [2024-11-19 17:14:24.259068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.797 [2024-11-19 17:14:24.259439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wVPZy7rVSS 00:15:32.056 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:32.314 [2024-11-19 17:14:24.725945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.314 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:32.573 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:32.831 [2024-11-19 17:14:25.282052] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.831 [2024-11-19 17:14:25.282350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.831 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.091 malloc0 00:15:33.091 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.659 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:33.659 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wVPZy7rVSS 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wVPZy7rVSS 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83513 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83513 /var/tmp/bdevperf.sock 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83513 ']' 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.917 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.917 [2024-11-19 17:14:26.445383] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
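Before this bdevperf instance attaches, setup_nvmf_tgt (target/tls.sh@50-59, traced above) provisioned the TLS-enabled target; condensed, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener is what requests TLS; with the same key file registered on both sides, the attach that follows succeeds and TLSTESTn1 runs the verify workload for 10 seconds.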
00:15:33.917 [2024-11-19 17:14:26.445506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83513 ] 00:15:34.178 [2024-11-19 17:14:26.593088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.178 [2024-11-19 17:14:26.660138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.115 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.115 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:35.115 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:35.374 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:35.374 [2024-11-19 17:14:27.877423] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.633 TLSTESTn1 00:15:35.633 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:35.633 Running I/O for 10 seconds... 00:15:37.579 4096.00 IOPS, 16.00 MiB/s [2024-11-19T17:14:31.486Z] 4092.50 IOPS, 15.99 MiB/s [2024-11-19T17:14:32.421Z] 4148.00 IOPS, 16.20 MiB/s [2024-11-19T17:14:33.371Z] 4194.25 IOPS, 16.38 MiB/s [2024-11-19T17:14:34.307Z] 4166.00 IOPS, 16.27 MiB/s [2024-11-19T17:14:35.245Z] 4189.50 IOPS, 16.37 MiB/s [2024-11-19T17:14:36.182Z] 4219.29 IOPS, 16.48 MiB/s [2024-11-19T17:14:37.120Z] 4238.12 IOPS, 16.56 MiB/s [2024-11-19T17:14:38.500Z] 4264.33 IOPS, 16.66 MiB/s [2024-11-19T17:14:38.500Z] 4278.20 IOPS, 16.71 MiB/s 00:15:45.967 Latency(us) 00:15:45.968 [2024-11-19T17:14:38.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.968 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:45.968 Verification LBA range: start 0x0 length 0x2000 00:15:45.968 TLSTESTn1 : 10.01 4284.54 16.74 0.00 0.00 29823.04 4974.78 39083.29 00:15:45.968 [2024-11-19T17:14:38.501Z] =================================================================================================================== 00:15:45.968 [2024-11-19T17:14:38.501Z] Total : 4284.54 16.74 0.00 0.00 29823.04 4974.78 39083.29 00:15:45.968 { 00:15:45.968 "results": [ 00:15:45.968 { 00:15:45.968 "job": "TLSTESTn1", 00:15:45.968 "core_mask": "0x4", 00:15:45.968 "workload": "verify", 00:15:45.968 "status": "finished", 00:15:45.968 "verify_range": { 00:15:45.968 "start": 0, 00:15:45.968 "length": 8192 00:15:45.968 }, 00:15:45.968 "queue_depth": 128, 00:15:45.968 "io_size": 4096, 00:15:45.968 "runtime": 10.014148, 00:15:45.968 "iops": 4284.538235304691, 00:15:45.968 "mibps": 16.736477481658948, 00:15:45.968 "io_failed": 0, 00:15:45.968 "io_timeout": 0, 00:15:45.968 "avg_latency_us": 29823.038780929135, 00:15:45.968 "min_latency_us": 4974.778181818182, 00:15:45.968 "max_latency_us": 39083.28727272727 00:15:45.968 } 00:15:45.968 ], 00:15:45.968 "core_count": 1 00:15:45.968 } 00:15:45.968 17:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83513 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83513 ']' 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83513 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83513 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:45.968 killing process with pid 83513 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83513' 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83513 00:15:45.968 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.968 00:15:45.968 Latency(us) 00:15:45.968 [2024-11-19T17:14:38.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.968 [2024-11-19T17:14:38.501Z] =================================================================================================================== 00:15:45.968 [2024-11-19T17:14:38.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83513 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.wVPZy7rVSS 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wVPZy7rVSS 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wVPZy7rVSS 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wVPZy7rVSS 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.wVPZy7rVSS 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83674 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83674 /var/tmp/bdevperf.sock 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83674 ']' 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.968 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.968 [2024-11-19 17:14:38.435825] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:45.968 [2024-11-19 17:14:38.436122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83674 ] 00:15:46.228 [2024-11-19 17:14:38.584452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.228 [2024-11-19 17:14:38.630328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.228 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.228 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.228 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:46.798 [2024-11-19 17:14:39.020498] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wVPZy7rVSS': 0100666 00:15:46.798 [2024-11-19 17:14:39.020531] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:46.798 2024/11/19 17:14:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.wVPZy7rVSS], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:15:46.798 request: 00:15:46.798 { 00:15:46.798 "method": "keyring_file_add_key", 00:15:46.798 "params": { 00:15:46.798 "name": "key0", 00:15:46.798 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:46.798 } 00:15:46.798 } 00:15:46.798 Got JSON-RPC error response 00:15:46.798 GoRPCClient: error on JSON-RPC call 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:46.798 [2024-11-19 17:14:39.296669] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.798 [2024-11-19 17:14:39.296797] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:46.798 2024/11/19 17:14:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:15:46.798 request: 00:15:46.798 { 00:15:46.798 "method": "bdev_nvme_attach_controller", 00:15:46.798 "params": { 00:15:46.798 "name": "TLSTEST", 00:15:46.798 "trtype": "tcp", 00:15:46.798 "traddr": "10.0.0.3", 00:15:46.798 "adrfam": "ipv4", 00:15:46.798 "trsvcid": "4420", 00:15:46.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.798 "prchk_reftag": false, 00:15:46.798 "prchk_guard": false, 00:15:46.798 "hdgst": false, 00:15:46.798 "ddgst": false, 00:15:46.798 "psk": "key0", 00:15:46.798 "allow_unrecognized_csi": false 00:15:46.798 } 00:15:46.798 } 00:15:46.798 Got JSON-RPC error response 00:15:46.798 GoRPCClient: error on JSON-RPC call 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83674 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83674 ']' 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83674 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.798 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83674 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:47.123 killing process with pid 83674 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83674' 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83674 00:15:47.123 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.123 00:15:47.123 Latency(us) 00:15:47.123 [2024-11-19T17:14:39.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.123 [2024-11-19T17:14:39.656Z] =================================================================================================================== 00:15:47.123 [2024-11-19T17:14:39.656Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83674 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
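This pair of failures (target/tls.sh@171-172) verifies permission enforcement on the initiator side: after chmod 0666, keyring_file_check_path rejects the key file (mode 0100666 in the error above), keyring_file_add_key returns Operation not permitted, and the attach then fails with 'Could not load PSK: key0'. Key files must be readable by their owner only before the keyring will load them:

    chmod 0600 /tmp/tmp.wVPZy7rVSS
    stat -c '%a' /tmp/tmp.wVPZy7rVSS   # must print 600; group/world bits must be clear

tls.sh@182 below restores exactly this mode before the key is reused.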
00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83417 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83417 ']' 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83417 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83417 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.123 killing process with pid 83417 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83417' 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83417 00:15:47.123 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83417 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83718 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83718 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83718 ']' 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.402 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.402 [2024-11-19 17:14:39.833383] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
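The target restart here sets up the server-side variant of the same check: the key file is still mode 0666 at this point, so the NOT setup_nvmf_tgt case (target/tls.sh@178) expects the two calls below to fail in sequence, first the keyring load and then the host registration that references the never-loaded key (rpc.py again standing for the full scripts/rpc.py path):

    rpc.py keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS    # rejected: mode 0100666
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                # fails: Key 'key0' does not exist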
00:15:47.402 [2024-11-19 17:14:39.833470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.661 [2024-11-19 17:14:39.976606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.661 [2024-11-19 17:14:40.022797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.661 [2024-11-19 17:14:40.022870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.661 [2024-11-19 17:14:40.022880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.661 [2024-11-19 17:14:40.022888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.661 [2024-11-19 17:14:40.022894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.661 [2024-11-19 17:14:40.023324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.661 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.661 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:47.661 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.661 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.661 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wVPZy7rVSS 00:15:47.919 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:48.177 [2024-11-19 17:14:40.477230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.178 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:48.436 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:48.694 [2024-11-19 17:14:41.013380] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:48.694 [2024-11-19 17:14:41.013652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.694 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:48.953 malloc0 00:15:48.953 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:49.211 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:49.469 [2024-11-19 17:14:41.800622] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wVPZy7rVSS': 0100666 00:15:49.469 [2024-11-19 17:14:41.800680] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:49.469 2024/11/19 17:14:41 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.wVPZy7rVSS], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:15:49.469 request: 00:15:49.469 { 00:15:49.469 "method": "keyring_file_add_key", 00:15:49.469 "params": { 00:15:49.469 "name": "key0", 00:15:49.469 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:49.470 } 00:15:49.470 } 00:15:49.470 Got JSON-RPC error response 00:15:49.470 GoRPCClient: error on JSON-RPC call 00:15:49.470 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:49.728 [2024-11-19 17:14:42.036654] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:49.728 [2024-11-19 17:14:42.036737] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:49.728 2024/11/19 17:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:49.728 request: 00:15:49.728 { 00:15:49.728 "method": "nvmf_subsystem_add_host", 00:15:49.728 "params": { 00:15:49.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.728 "host": "nqn.2016-06.io.spdk:host1", 00:15:49.728 "psk": "key0" 00:15:49.728 } 00:15:49.728 } 00:15:49.728 Got JSON-RPC error response 00:15:49.728 GoRPCClient: error on JSON-RPC call 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83718 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83718 ']' 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 83718 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83718 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:49.728 killing process with pid 83718 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83718' 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83718 00:15:49.728 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83718 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.wVPZy7rVSS 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83828 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83828 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83828 ']' 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.986 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.986 [2024-11-19 17:14:42.348604] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:49.986 [2024-11-19 17:14:42.348710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.986 [2024-11-19 17:14:42.486545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.245 [2024-11-19 17:14:42.533500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:50.245 [2024-11-19 17:14:42.533574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.245 [2024-11-19 17:14:42.533601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.245 [2024-11-19 17:14:42.533609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.245 [2024-11-19 17:14:42.533616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.245 [2024-11-19 17:14:42.534056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wVPZy7rVSS 00:15:50.245 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:50.504 [2024-11-19 17:14:42.939560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.504 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:50.762 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:51.020 [2024-11-19 17:14:43.459726] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:51.020 [2024-11-19 17:14:43.460051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:51.021 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:51.279 malloc0 00:15:51.279 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:51.538 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:51.797 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83924 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83924 /var/tmp/bdevperf.sock 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83924 ']' 00:15:52.055 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.056 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.056 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.056 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.056 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.315 [2024-11-19 17:14:44.599305] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:52.315 [2024-11-19 17:14:44.599411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83924 ] 00:15:52.315 [2024-11-19 17:14:44.747853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.315 [2024-11-19 17:14:44.817207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.573 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.573 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:52.573 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:15:52.832 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:53.091 [2024-11-19 17:14:45.460408] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:53.091 TLSTESTn1 00:15:53.091 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:53.658 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:53.658 "subsystems": [ 00:15:53.658 { 00:15:53.658 "subsystem": "keyring", 00:15:53.658 "config": [ 00:15:53.658 { 00:15:53.658 "method": "keyring_file_add_key", 00:15:53.658 "params": { 00:15:53.658 "name": "key0", 00:15:53.658 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:53.658 } 00:15:53.658 } 00:15:53.658 ] 00:15:53.658 }, 00:15:53.658 { 00:15:53.658 "subsystem": "iobuf", 00:15:53.658 "config": [ 00:15:53.658 { 00:15:53.658 "method": "iobuf_set_options", 00:15:53.658 "params": { 00:15:53.658 "enable_numa": false, 00:15:53.658 "large_bufsize": 135168, 00:15:53.658 "large_pool_count": 1024, 00:15:53.658 
"small_bufsize": 8192, 00:15:53.658 "small_pool_count": 8192 00:15:53.658 } 00:15:53.658 } 00:15:53.658 ] 00:15:53.658 }, 00:15:53.658 { 00:15:53.658 "subsystem": "sock", 00:15:53.658 "config": [ 00:15:53.658 { 00:15:53.658 "method": "sock_set_default_impl", 00:15:53.658 "params": { 00:15:53.658 "impl_name": "posix" 00:15:53.658 } 00:15:53.658 }, 00:15:53.658 { 00:15:53.658 "method": "sock_impl_set_options", 00:15:53.658 "params": { 00:15:53.658 "enable_ktls": false, 00:15:53.658 "enable_placement_id": 0, 00:15:53.658 "enable_quickack": false, 00:15:53.658 "enable_recv_pipe": true, 00:15:53.658 "enable_zerocopy_send_client": false, 00:15:53.658 "enable_zerocopy_send_server": true, 00:15:53.658 "impl_name": "ssl", 00:15:53.658 "recv_buf_size": 4096, 00:15:53.658 "send_buf_size": 4096, 00:15:53.658 "tls_version": 0, 00:15:53.658 "zerocopy_threshold": 0 00:15:53.658 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "sock_impl_set_options", 00:15:53.659 "params": { 00:15:53.659 "enable_ktls": false, 00:15:53.659 "enable_placement_id": 0, 00:15:53.659 "enable_quickack": false, 00:15:53.659 "enable_recv_pipe": true, 00:15:53.659 "enable_zerocopy_send_client": false, 00:15:53.659 "enable_zerocopy_send_server": true, 00:15:53.659 "impl_name": "posix", 00:15:53.659 "recv_buf_size": 2097152, 00:15:53.659 "send_buf_size": 2097152, 00:15:53.659 "tls_version": 0, 00:15:53.659 "zerocopy_threshold": 0 00:15:53.659 } 00:15:53.659 } 00:15:53.659 ] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "vmd", 00:15:53.659 "config": [] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "accel", 00:15:53.659 "config": [ 00:15:53.659 { 00:15:53.659 "method": "accel_set_options", 00:15:53.659 "params": { 00:15:53.659 "buf_count": 2048, 00:15:53.659 "large_cache_size": 16, 00:15:53.659 "sequence_count": 2048, 00:15:53.659 "small_cache_size": 128, 00:15:53.659 "task_count": 2048 00:15:53.659 } 00:15:53.659 } 00:15:53.659 ] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "bdev", 00:15:53.659 "config": [ 00:15:53.659 { 00:15:53.659 "method": "bdev_set_options", 00:15:53.659 "params": { 00:15:53.659 "bdev_auto_examine": true, 00:15:53.659 "bdev_io_cache_size": 256, 00:15:53.659 "bdev_io_pool_size": 65535, 00:15:53.659 "iobuf_large_cache_size": 16, 00:15:53.659 "iobuf_small_cache_size": 128 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_raid_set_options", 00:15:53.659 "params": { 00:15:53.659 "process_max_bandwidth_mb_sec": 0, 00:15:53.659 "process_window_size_kb": 1024 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_iscsi_set_options", 00:15:53.659 "params": { 00:15:53.659 "timeout_sec": 30 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_nvme_set_options", 00:15:53.659 "params": { 00:15:53.659 "action_on_timeout": "none", 00:15:53.659 "allow_accel_sequence": false, 00:15:53.659 "arbitration_burst": 0, 00:15:53.659 "bdev_retry_count": 3, 00:15:53.659 "ctrlr_loss_timeout_sec": 0, 00:15:53.659 "delay_cmd_submit": true, 00:15:53.659 "dhchap_dhgroups": [ 00:15:53.659 "null", 00:15:53.659 "ffdhe2048", 00:15:53.659 "ffdhe3072", 00:15:53.659 "ffdhe4096", 00:15:53.659 "ffdhe6144", 00:15:53.659 "ffdhe8192" 00:15:53.659 ], 00:15:53.659 "dhchap_digests": [ 00:15:53.659 "sha256", 00:15:53.659 "sha384", 00:15:53.659 "sha512" 00:15:53.659 ], 00:15:53.659 "disable_auto_failback": false, 00:15:53.659 "fast_io_fail_timeout_sec": 0, 00:15:53.659 "generate_uuids": false, 00:15:53.659 "high_priority_weight": 0, 00:15:53.659 
"io_path_stat": false, 00:15:53.659 "io_queue_requests": 0, 00:15:53.659 "keep_alive_timeout_ms": 10000, 00:15:53.659 "low_priority_weight": 0, 00:15:53.659 "medium_priority_weight": 0, 00:15:53.659 "nvme_adminq_poll_period_us": 10000, 00:15:53.659 "nvme_error_stat": false, 00:15:53.659 "nvme_ioq_poll_period_us": 0, 00:15:53.659 "rdma_cm_event_timeout_ms": 0, 00:15:53.659 "rdma_max_cq_size": 0, 00:15:53.659 "rdma_srq_size": 0, 00:15:53.659 "reconnect_delay_sec": 0, 00:15:53.659 "timeout_admin_us": 0, 00:15:53.659 "timeout_us": 0, 00:15:53.659 "transport_ack_timeout": 0, 00:15:53.659 "transport_retry_count": 4, 00:15:53.659 "transport_tos": 0 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_nvme_set_hotplug", 00:15:53.659 "params": { 00:15:53.659 "enable": false, 00:15:53.659 "period_us": 100000 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_malloc_create", 00:15:53.659 "params": { 00:15:53.659 "block_size": 4096, 00:15:53.659 "dif_is_head_of_md": false, 00:15:53.659 "dif_pi_format": 0, 00:15:53.659 "dif_type": 0, 00:15:53.659 "md_size": 0, 00:15:53.659 "name": "malloc0", 00:15:53.659 "num_blocks": 8192, 00:15:53.659 "optimal_io_boundary": 0, 00:15:53.659 "physical_block_size": 4096, 00:15:53.659 "uuid": "33c5b086-2e68-40db-9ae9-e092ebf887d9" 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "bdev_wait_for_examine" 00:15:53.659 } 00:15:53.659 ] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "nbd", 00:15:53.659 "config": [] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "scheduler", 00:15:53.659 "config": [ 00:15:53.659 { 00:15:53.659 "method": "framework_set_scheduler", 00:15:53.659 "params": { 00:15:53.659 "name": "static" 00:15:53.659 } 00:15:53.659 } 00:15:53.659 ] 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "subsystem": "nvmf", 00:15:53.659 "config": [ 00:15:53.659 { 00:15:53.659 "method": "nvmf_set_config", 00:15:53.659 "params": { 00:15:53.659 "admin_cmd_passthru": { 00:15:53.659 "identify_ctrlr": false 00:15:53.659 }, 00:15:53.659 "dhchap_dhgroups": [ 00:15:53.659 "null", 00:15:53.659 "ffdhe2048", 00:15:53.659 "ffdhe3072", 00:15:53.659 "ffdhe4096", 00:15:53.659 "ffdhe6144", 00:15:53.659 "ffdhe8192" 00:15:53.659 ], 00:15:53.659 "dhchap_digests": [ 00:15:53.659 "sha256", 00:15:53.659 "sha384", 00:15:53.659 "sha512" 00:15:53.659 ], 00:15:53.659 "discovery_filter": "match_any" 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "nvmf_set_max_subsystems", 00:15:53.659 "params": { 00:15:53.659 "max_subsystems": 1024 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "nvmf_set_crdt", 00:15:53.659 "params": { 00:15:53.659 "crdt1": 0, 00:15:53.659 "crdt2": 0, 00:15:53.659 "crdt3": 0 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "nvmf_create_transport", 00:15:53.659 "params": { 00:15:53.659 "abort_timeout_sec": 1, 00:15:53.659 "ack_timeout": 0, 00:15:53.659 "buf_cache_size": 4294967295, 00:15:53.659 "c2h_success": false, 00:15:53.659 "data_wr_pool_size": 0, 00:15:53.659 "dif_insert_or_strip": false, 00:15:53.659 "in_capsule_data_size": 4096, 00:15:53.659 "io_unit_size": 131072, 00:15:53.659 "max_aq_depth": 128, 00:15:53.659 "max_io_qpairs_per_ctrlr": 127, 00:15:53.659 "max_io_size": 131072, 00:15:53.659 "max_queue_depth": 128, 00:15:53.659 "num_shared_buffers": 511, 00:15:53.659 "sock_priority": 0, 00:15:53.659 "trtype": "TCP", 00:15:53.659 "zcopy": false 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": 
"nvmf_create_subsystem", 00:15:53.659 "params": { 00:15:53.659 "allow_any_host": false, 00:15:53.659 "ana_reporting": false, 00:15:53.659 "max_cntlid": 65519, 00:15:53.659 "max_namespaces": 10, 00:15:53.659 "min_cntlid": 1, 00:15:53.659 "model_number": "SPDK bdev Controller", 00:15:53.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.659 "serial_number": "SPDK00000000000001" 00:15:53.659 } 00:15:53.659 }, 00:15:53.659 { 00:15:53.659 "method": "nvmf_subsystem_add_host", 00:15:53.659 "params": { 00:15:53.659 "host": "nqn.2016-06.io.spdk:host1", 00:15:53.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.659 "psk": "key0" 00:15:53.659 } 00:15:53.659 }, 00:15:53.660 { 00:15:53.660 "method": "nvmf_subsystem_add_ns", 00:15:53.660 "params": { 00:15:53.660 "namespace": { 00:15:53.660 "bdev_name": "malloc0", 00:15:53.660 "nguid": "33C5B0862E6840DB9AE9E092EBF887D9", 00:15:53.660 "no_auto_visible": false, 00:15:53.660 "nsid": 1, 00:15:53.660 "uuid": "33c5b086-2e68-40db-9ae9-e092ebf887d9" 00:15:53.660 }, 00:15:53.660 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:53.660 } 00:15:53.660 }, 00:15:53.660 { 00:15:53.660 "method": "nvmf_subsystem_add_listener", 00:15:53.660 "params": { 00:15:53.660 "listen_address": { 00:15:53.660 "adrfam": "IPv4", 00:15:53.660 "traddr": "10.0.0.3", 00:15:53.660 "trsvcid": "4420", 00:15:53.660 "trtype": "TCP" 00:15:53.660 }, 00:15:53.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.660 "secure_channel": true 00:15:53.660 } 00:15:53.660 } 00:15:53.660 ] 00:15:53.660 } 00:15:53.660 ] 00:15:53.660 }' 00:15:53.660 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:53.918 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:53.918 "subsystems": [ 00:15:53.918 { 00:15:53.918 "subsystem": "keyring", 00:15:53.918 "config": [ 00:15:53.918 { 00:15:53.918 "method": "keyring_file_add_key", 00:15:53.918 "params": { 00:15:53.918 "name": "key0", 00:15:53.918 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:53.918 } 00:15:53.918 } 00:15:53.918 ] 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "subsystem": "iobuf", 00:15:53.918 "config": [ 00:15:53.918 { 00:15:53.918 "method": "iobuf_set_options", 00:15:53.918 "params": { 00:15:53.918 "enable_numa": false, 00:15:53.918 "large_bufsize": 135168, 00:15:53.918 "large_pool_count": 1024, 00:15:53.918 "small_bufsize": 8192, 00:15:53.918 "small_pool_count": 8192 00:15:53.918 } 00:15:53.918 } 00:15:53.918 ] 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "subsystem": "sock", 00:15:53.918 "config": [ 00:15:53.918 { 00:15:53.918 "method": "sock_set_default_impl", 00:15:53.918 "params": { 00:15:53.918 "impl_name": "posix" 00:15:53.918 } 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "method": "sock_impl_set_options", 00:15:53.918 "params": { 00:15:53.918 "enable_ktls": false, 00:15:53.918 "enable_placement_id": 0, 00:15:53.918 "enable_quickack": false, 00:15:53.918 "enable_recv_pipe": true, 00:15:53.918 "enable_zerocopy_send_client": false, 00:15:53.918 "enable_zerocopy_send_server": true, 00:15:53.918 "impl_name": "ssl", 00:15:53.918 "recv_buf_size": 4096, 00:15:53.918 "send_buf_size": 4096, 00:15:53.918 "tls_version": 0, 00:15:53.918 "zerocopy_threshold": 0 00:15:53.918 } 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "method": "sock_impl_set_options", 00:15:53.918 "params": { 00:15:53.918 "enable_ktls": false, 00:15:53.918 "enable_placement_id": 0, 00:15:53.918 "enable_quickack": false, 00:15:53.918 "enable_recv_pipe": true, 
00:15:53.918 "enable_zerocopy_send_client": false, 00:15:53.918 "enable_zerocopy_send_server": true, 00:15:53.918 "impl_name": "posix", 00:15:53.918 "recv_buf_size": 2097152, 00:15:53.918 "send_buf_size": 2097152, 00:15:53.918 "tls_version": 0, 00:15:53.918 "zerocopy_threshold": 0 00:15:53.918 } 00:15:53.918 } 00:15:53.918 ] 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "subsystem": "vmd", 00:15:53.918 "config": [] 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "subsystem": "accel", 00:15:53.918 "config": [ 00:15:53.918 { 00:15:53.918 "method": "accel_set_options", 00:15:53.918 "params": { 00:15:53.918 "buf_count": 2048, 00:15:53.918 "large_cache_size": 16, 00:15:53.918 "sequence_count": 2048, 00:15:53.918 "small_cache_size": 128, 00:15:53.918 "task_count": 2048 00:15:53.918 } 00:15:53.918 } 00:15:53.918 ] 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "subsystem": "bdev", 00:15:53.918 "config": [ 00:15:53.918 { 00:15:53.918 "method": "bdev_set_options", 00:15:53.918 "params": { 00:15:53.918 "bdev_auto_examine": true, 00:15:53.918 "bdev_io_cache_size": 256, 00:15:53.918 "bdev_io_pool_size": 65535, 00:15:53.918 "iobuf_large_cache_size": 16, 00:15:53.918 "iobuf_small_cache_size": 128 00:15:53.918 } 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "method": "bdev_raid_set_options", 00:15:53.918 "params": { 00:15:53.918 "process_max_bandwidth_mb_sec": 0, 00:15:53.918 "process_window_size_kb": 1024 00:15:53.918 } 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "method": "bdev_iscsi_set_options", 00:15:53.918 "params": { 00:15:53.918 "timeout_sec": 30 00:15:53.918 } 00:15:53.918 }, 00:15:53.918 { 00:15:53.918 "method": "bdev_nvme_set_options", 00:15:53.918 "params": { 00:15:53.918 "action_on_timeout": "none", 00:15:53.918 "allow_accel_sequence": false, 00:15:53.918 "arbitration_burst": 0, 00:15:53.919 "bdev_retry_count": 3, 00:15:53.919 "ctrlr_loss_timeout_sec": 0, 00:15:53.919 "delay_cmd_submit": true, 00:15:53.919 "dhchap_dhgroups": [ 00:15:53.919 "null", 00:15:53.919 "ffdhe2048", 00:15:53.919 "ffdhe3072", 00:15:53.919 "ffdhe4096", 00:15:53.919 "ffdhe6144", 00:15:53.919 "ffdhe8192" 00:15:53.919 ], 00:15:53.919 "dhchap_digests": [ 00:15:53.919 "sha256", 00:15:53.919 "sha384", 00:15:53.919 "sha512" 00:15:53.919 ], 00:15:53.919 "disable_auto_failback": false, 00:15:53.919 "fast_io_fail_timeout_sec": 0, 00:15:53.919 "generate_uuids": false, 00:15:53.919 "high_priority_weight": 0, 00:15:53.919 "io_path_stat": false, 00:15:53.919 "io_queue_requests": 512, 00:15:53.919 "keep_alive_timeout_ms": 10000, 00:15:53.919 "low_priority_weight": 0, 00:15:53.919 "medium_priority_weight": 0, 00:15:53.919 "nvme_adminq_poll_period_us": 10000, 00:15:53.919 "nvme_error_stat": false, 00:15:53.919 "nvme_ioq_poll_period_us": 0, 00:15:53.919 "rdma_cm_event_timeout_ms": 0, 00:15:53.919 "rdma_max_cq_size": 0, 00:15:53.919 "rdma_srq_size": 0, 00:15:53.919 "reconnect_delay_sec": 0, 00:15:53.919 "timeout_admin_us": 0, 00:15:53.919 "timeout_us": 0, 00:15:53.919 "transport_ack_timeout": 0, 00:15:53.919 "transport_retry_count": 4, 00:15:53.919 "transport_tos": 0 00:15:53.919 } 00:15:53.919 }, 00:15:53.919 { 00:15:53.919 "method": "bdev_nvme_attach_controller", 00:15:53.919 "params": { 00:15:53.919 "adrfam": "IPv4", 00:15:53.919 "ctrlr_loss_timeout_sec": 0, 00:15:53.919 "ddgst": false, 00:15:53.919 "fast_io_fail_timeout_sec": 0, 00:15:53.919 "hdgst": false, 00:15:53.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.919 "multipath": "multipath", 00:15:53.919 "name": "TLSTEST", 00:15:53.919 "prchk_guard": false, 00:15:53.919 "prchk_reftag": 
false, 00:15:53.919 "psk": "key0", 00:15:53.919 "reconnect_delay_sec": 0, 00:15:53.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.919 "traddr": "10.0.0.3", 00:15:53.919 "trsvcid": "4420", 00:15:53.919 "trtype": "TCP" 00:15:53.919 } 00:15:53.919 }, 00:15:53.919 { 00:15:53.919 "method": "bdev_nvme_set_hotplug", 00:15:53.919 "params": { 00:15:53.919 "enable": false, 00:15:53.919 "period_us": 100000 00:15:53.919 } 00:15:53.919 }, 00:15:53.919 { 00:15:53.919 "method": "bdev_wait_for_examine" 00:15:53.919 } 00:15:53.919 ] 00:15:53.919 }, 00:15:53.919 { 00:15:53.919 "subsystem": "nbd", 00:15:53.919 "config": [] 00:15:53.919 } 00:15:53.919 ] 00:15:53.919 }' 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83924 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83924 ']' 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83924 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83924 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:53.919 killing process with pid 83924 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83924' 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83924 00:15:53.919 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.919 00:15:53.919 Latency(us) 00:15:53.919 [2024-11-19T17:14:46.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.919 [2024-11-19T17:14:46.452Z] =================================================================================================================== 00:15:53.919 [2024-11-19T17:14:46.452Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:53.919 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83924 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83828 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83828 ']' 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83828 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83828 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:54.178 killing process with pid 83828 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83828' 00:15:54.178 
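The bdevperf-side configuration dumped above is the product of the two RPCs issued earlier against the bdevperf socket. Collected as a standalone sketch of that initiator-side flow, using the exact invocations from this run (/tmp/tmp.wVPZy7rVSS is the PSK interchange file the test generated earlier):

  # bdevperf was started with -z, so it idles until driven over its RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS
  # attach an NVMe/TCP controller that negotiates TLS with that PSK
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0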
17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83828 00:15:54.178 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83828 00:15:54.437 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:54.437 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.437 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.437 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:54.437 "subsystems": [ 00:15:54.437 { 00:15:54.437 "subsystem": "keyring", 00:15:54.437 "config": [ 00:15:54.437 { 00:15:54.437 "method": "keyring_file_add_key", 00:15:54.437 "params": { 00:15:54.437 "name": "key0", 00:15:54.437 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:54.437 } 00:15:54.437 } 00:15:54.437 ] 00:15:54.437 }, 00:15:54.437 { 00:15:54.437 "subsystem": "iobuf", 00:15:54.437 "config": [ 00:15:54.437 { 00:15:54.437 "method": "iobuf_set_options", 00:15:54.437 "params": { 00:15:54.437 "enable_numa": false, 00:15:54.437 "large_bufsize": 135168, 00:15:54.437 "large_pool_count": 1024, 00:15:54.437 "small_bufsize": 8192, 00:15:54.437 "small_pool_count": 8192 00:15:54.437 } 00:15:54.437 } 00:15:54.437 ] 00:15:54.437 }, 00:15:54.437 { 00:15:54.437 "subsystem": "sock", 00:15:54.437 "config": [ 00:15:54.437 { 00:15:54.437 "method": "sock_set_default_impl", 00:15:54.437 "params": { 00:15:54.437 "impl_name": "posix" 00:15:54.437 } 00:15:54.437 }, 00:15:54.437 { 00:15:54.437 "method": "sock_impl_set_options", 00:15:54.437 "params": { 00:15:54.437 "enable_ktls": false, 00:15:54.437 "enable_placement_id": 0, 00:15:54.437 "enable_quickack": false, 00:15:54.437 "enable_recv_pipe": true, 00:15:54.437 "enable_zerocopy_send_client": false, 00:15:54.437 "enable_zerocopy_send_server": true, 00:15:54.437 "impl_name": "ssl", 00:15:54.437 "recv_buf_size": 4096, 00:15:54.437 "send_buf_size": 4096, 00:15:54.437 "tls_version": 0, 00:15:54.437 "zerocopy_threshold": 0 00:15:54.437 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "sock_impl_set_options", 00:15:54.438 "params": { 00:15:54.438 "enable_ktls": false, 00:15:54.438 "enable_placement_id": 0, 00:15:54.438 "enable_quickack": false, 00:15:54.438 "enable_recv_pipe": true, 00:15:54.438 "enable_zerocopy_send_client": false, 00:15:54.438 "enable_zerocopy_send_server": true, 00:15:54.438 "impl_name": "posix", 00:15:54.438 "recv_buf_size": 2097152, 00:15:54.438 "send_buf_size": 2097152, 00:15:54.438 "tls_version": 0, 00:15:54.438 "zerocopy_threshold": 0 00:15:54.438 } 00:15:54.438 } 00:15:54.438 ] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "vmd", 00:15:54.438 "config": [] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "accel", 00:15:54.438 "config": [ 00:15:54.438 { 00:15:54.438 "method": "accel_set_options", 00:15:54.438 "params": { 00:15:54.438 "buf_count": 2048, 00:15:54.438 "large_cache_size": 16, 00:15:54.438 "sequence_count": 2048, 00:15:54.438 "small_cache_size": 128, 00:15:54.438 "task_count": 2048 00:15:54.438 } 00:15:54.438 } 00:15:54.438 ] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "bdev", 00:15:54.438 "config": [ 00:15:54.438 { 00:15:54.438 "method": "bdev_set_options", 00:15:54.438 "params": { 00:15:54.438 "bdev_auto_examine": true, 00:15:54.438 "bdev_io_cache_size": 256, 00:15:54.438 "bdev_io_pool_size": 65535, 00:15:54.438 "iobuf_large_cache_size": 16, 00:15:54.438 
"iobuf_small_cache_size": 128 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_raid_set_options", 00:15:54.438 "params": { 00:15:54.438 "process_max_bandwidth_mb_sec": 0, 00:15:54.438 "process_window_size_kb": 1024 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_iscsi_set_options", 00:15:54.438 "params": { 00:15:54.438 "timeout_sec": 30 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_nvme_set_options", 00:15:54.438 "params": { 00:15:54.438 "action_on_timeout": "none", 00:15:54.438 "allow_accel_sequence": false, 00:15:54.438 "arbitration_burst": 0, 00:15:54.438 "bdev_retry_count": 3, 00:15:54.438 "ctrlr_loss_timeout_sec": 0, 00:15:54.438 "delay_cmd_submit": true, 00:15:54.438 "dhchap_dhgroups": [ 00:15:54.438 "null", 00:15:54.438 "ffdhe2048", 00:15:54.438 "ffdhe3072", 00:15:54.438 "ffdhe4096", 00:15:54.438 "ffdhe6144", 00:15:54.438 "ffdhe8192" 00:15:54.438 ], 00:15:54.438 "dhchap_digests": [ 00:15:54.438 "sha256", 00:15:54.438 "sha384", 00:15:54.438 "sha512" 00:15:54.438 ], 00:15:54.438 "disable_auto_failback": false, 00:15:54.438 "fast_io_fail_timeout_sec": 0, 00:15:54.438 "generate_uuids": false, 00:15:54.438 "high_priority_weight": 0, 00:15:54.438 "io_path_stat": false, 00:15:54.438 "io_queue_requests": 0, 00:15:54.438 "keep_alive_timeout_ms": 10000, 00:15:54.438 "low_priority_weight": 0, 00:15:54.438 "medium_priority_weight": 0, 00:15:54.438 "nvme_adminq_poll_period_us": 10000, 00:15:54.438 "nvme_error_stat": false, 00:15:54.438 "nvme_ioq_poll_period_us": 0, 00:15:54.438 "rdma_cm_event_timeout_ms": 0, 00:15:54.438 "rdma_max_cq_size": 0, 00:15:54.438 "rdma_srq_size": 0, 00:15:54.438 "reconnect_delay_sec": 0, 00:15:54.438 "timeout_admin_us": 0, 00:15:54.438 "timeout_us": 0, 00:15:54.438 "transport_ack_timeout": 0, 00:15:54.438 "transport_retry_count": 4, 00:15:54.438 "transport_tos": 0 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_nvme_set_hotplug", 00:15:54.438 "params": { 00:15:54.438 "enable": false, 00:15:54.438 "period_us": 100000 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_malloc_create", 00:15:54.438 "params": { 00:15:54.438 "block_size": 4096, 00:15:54.438 "dif_is_head_of_md": false, 00:15:54.438 "dif_pi_format": 0, 00:15:54.438 "dif_type": 0, 00:15:54.438 "md_size": 0, 00:15:54.438 "name": "malloc0", 00:15:54.438 "num_blocks": 8192, 00:15:54.438 "optimal_io_boundary": 0, 00:15:54.438 "physical_block_size": 4096, 00:15:54.438 "uuid": "33c5b086-2e68-40db-9ae9-e092ebf887d9" 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "bdev_wait_for_examine" 00:15:54.438 } 00:15:54.438 ] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "nbd", 00:15:54.438 "config": [] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "scheduler", 00:15:54.438 "config": [ 00:15:54.438 { 00:15:54.438 "method": "framework_set_scheduler", 00:15:54.438 "params": { 00:15:54.438 "name": "static" 00:15:54.438 } 00:15:54.438 } 00:15:54.438 ] 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "subsystem": "nvmf", 00:15:54.438 "config": [ 00:15:54.438 { 00:15:54.438 "method": "nvmf_set_config", 00:15:54.438 "params": { 00:15:54.438 "admin_cmd_passthru": { 00:15:54.438 "identify_ctrlr": false 00:15:54.438 }, 00:15:54.438 "dhchap_dhgroups": [ 00:15:54.438 "null", 00:15:54.438 "ffdhe2048", 00:15:54.438 "ffdhe3072", 00:15:54.438 "ffdhe4096", 00:15:54.438 "ffdhe6144", 00:15:54.438 "ffdhe8192" 00:15:54.438 ], 00:15:54.438 "dhchap_digests": [ 00:15:54.438 "sha256", 
00:15:54.438 "sha384", 00:15:54.438 "sha512" 00:15:54.438 ], 00:15:54.438 "discovery_filter": "match_any" 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "nvmf_set_max_subsystems", 00:15:54.438 "params": { 00:15:54.438 "max_subsystems": 1024 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "nvmf_set_crdt", 00:15:54.438 "params": { 00:15:54.438 "crdt1": 0, 00:15:54.438 "crdt2": 0, 00:15:54.438 "crdt3": 0 00:15:54.438 } 00:15:54.438 }, 00:15:54.438 { 00:15:54.438 "method": "nvmf_create_transport", 00:15:54.438 "params": { 00:15:54.438 "abort_timeout_sec": 1, 00:15:54.438 "ack_timeout": 0, 00:15:54.438 "buf_cache_size": 4294967295, 00:15:54.438 "c2h_success": false, 00:15:54.438 "data_wr_pool_size": 0, 00:15:54.438 "dif_insert_or_strip": false, 00:15:54.438 "in_capsule_data_size": 4096, 00:15:54.438 "io_unit_size": 131072, 00:15:54.438 "max_aq_depth": 128, 00:15:54.438 "max_io_qpairs_per_ctrlr": 127, 00:15:54.438 "max_io_size": 131072, 00:15:54.439 "max_queue_depth": 128, 00:15:54.439 "num_shared_buffers": 511, 00:15:54.439 "sock_priority": 0, 00:15:54.439 "trtype": "TCP", 00:15:54.439 "zcopy": false 00:15:54.439 } 00:15:54.439 }, 00:15:54.439 { 00:15:54.439 "method": "nvmf_create_subsystem", 00:15:54.439 "params": { 00:15:54.439 "allow_any_host": false, 00:15:54.439 "ana_reporting": false, 00:15:54.439 "max_cntlid": 65519, 00:15:54.439 "max_namespaces": 10, 00:15:54.439 "min_cntlid": 1, 00:15:54.439 "model_number": "SPDK bdev Controller", 00:15:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.439 "serial_number": "SPDK00000000000001" 00:15:54.439 } 00:15:54.439 }, 00:15:54.439 { 00:15:54.439 "method": "nvmf_subsystem_add_host", 00:15:54.439 "params": { 00:15:54.439 "host": "nqn.2016-06.io.spdk:host1", 00:15:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.439 "psk": "key0" 00:15:54.439 } 00:15:54.439 }, 00:15:54.439 { 00:15:54.439 "method": "nvmf_subsystem_add_ns", 00:15:54.439 "params": { 00:15:54.439 "namespace": { 00:15:54.439 "bdev_name": "malloc0", 00:15:54.439 "nguid": "33C5B0862E6840DB9AE9E092EBF887D9", 00:15:54.439 "no_auto_visible": false, 00:15:54.439 "nsid": 1, 00:15:54.439 "uuid": "33c5b086-2e68-40db-9ae9-e092ebf887d9" 00:15:54.439 }, 00:15:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:54.439 } 00:15:54.439 }, 00:15:54.439 { 00:15:54.439 "method": "nvmf_subsystem_add_listener", 00:15:54.439 "params": { 00:15:54.439 "listen_address": { 00:15:54.439 "adrfam": "IPv4", 00:15:54.439 "traddr": "10.0.0.3", 00:15:54.439 "trsvcid": "4420", 00:15:54.439 "trtype": "TCP" 00:15:54.439 }, 00:15:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.439 "secure_channel": true 00:15:54.439 } 00:15:54.439 } 00:15:54.439 ] 00:15:54.439 } 00:15:54.439 ] 00:15:54.439 }' 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83996 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83996 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83996 ']' 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.439 17:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.439 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.439 [2024-11-19 17:14:46.925731] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:54.439 [2024-11-19 17:14:46.925883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.698 [2024-11-19 17:14:47.073196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.698 [2024-11-19 17:14:47.133309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.698 [2024-11-19 17:14:47.133369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.698 [2024-11-19 17:14:47.133379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.698 [2024-11-19 17:14:47.133386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.698 [2024-11-19 17:14:47.133392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.698 [2024-11-19 17:14:47.133889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.957 [2024-11-19 17:14:47.371607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.957 [2024-11-19 17:14:47.403497] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:54.957 [2024-11-19 17:14:47.403737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.524 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.524 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:55.524 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.524 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.524 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.524 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
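This phase restarts both applications from the JSON produced by save_config instead of re-issuing the RPCs one by one: the target above reads its configuration from /dev/fd/62 and bdevperf will read its from /dev/fd/63, both fed by the echoed dumps. A minimal sketch of the same save-and-replay pattern, assuming a scratch file named config.json (the file name is hypothetical; the test pipes the JSON directly):

  # capture the live configuration of a running target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > config.json
  # later, start a fresh target that replays every subsystem from the dump
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c config.json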
00:15:55.524 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84040 00:15:55.524 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84040 /var/tmp/bdevperf.sock 00:15:55.524 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84040 ']' 00:15:55.524 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.525 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:55.525 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.525 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.525 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:55.525 "subsystems": [ 00:15:55.525 { 00:15:55.525 "subsystem": "keyring", 00:15:55.525 "config": [ 00:15:55.525 { 00:15:55.525 "method": "keyring_file_add_key", 00:15:55.525 "params": { 00:15:55.525 "name": "key0", 00:15:55.525 "path": "/tmp/tmp.wVPZy7rVSS" 00:15:55.525 } 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "iobuf", 00:15:55.525 "config": [ 00:15:55.525 { 00:15:55.525 "method": "iobuf_set_options", 00:15:55.525 "params": { 00:15:55.525 "enable_numa": false, 00:15:55.525 "large_bufsize": 135168, 00:15:55.525 "large_pool_count": 1024, 00:15:55.525 "small_bufsize": 8192, 00:15:55.525 "small_pool_count": 8192 00:15:55.525 } 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "sock", 00:15:55.525 "config": [ 00:15:55.525 { 00:15:55.525 "method": "sock_set_default_impl", 00:15:55.525 "params": { 00:15:55.525 "impl_name": "posix" 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "sock_impl_set_options", 00:15:55.525 "params": { 00:15:55.525 "enable_ktls": false, 00:15:55.525 "enable_placement_id": 0, 00:15:55.525 "enable_quickack": false, 00:15:55.525 "enable_recv_pipe": true, 00:15:55.525 "enable_zerocopy_send_client": false, 00:15:55.525 "enable_zerocopy_send_server": true, 00:15:55.525 "impl_name": "ssl", 00:15:55.525 "recv_buf_size": 4096, 00:15:55.525 "send_buf_size": 4096, 00:15:55.525 "tls_version": 0, 00:15:55.525 "zerocopy_threshold": 0 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "sock_impl_set_options", 00:15:55.525 "params": { 00:15:55.525 "enable_ktls": false, 00:15:55.525 "enable_placement_id": 0, 00:15:55.525 "enable_quickack": false, 00:15:55.525 "enable_recv_pipe": true, 00:15:55.525 "enable_zerocopy_send_client": false, 00:15:55.525 "enable_zerocopy_send_server": true, 00:15:55.525 "impl_name": "posix", 00:15:55.525 "recv_buf_size": 2097152, 00:15:55.525 "send_buf_size": 2097152, 00:15:55.525 "tls_version": 0, 00:15:55.525 "zerocopy_threshold": 0 00:15:55.525 } 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "vmd", 00:15:55.525 "config": [] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "accel", 00:15:55.525 "config": [ 00:15:55.525 { 00:15:55.525 "method": "accel_set_options", 00:15:55.525 "params": { 00:15:55.525 "buf_count": 2048, 00:15:55.525 "large_cache_size": 16, 00:15:55.525 "sequence_count": 2048, 00:15:55.525 
"small_cache_size": 128, 00:15:55.525 "task_count": 2048 00:15:55.525 } 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "bdev", 00:15:55.525 "config": [ 00:15:55.525 { 00:15:55.525 "method": "bdev_set_options", 00:15:55.525 "params": { 00:15:55.525 "bdev_auto_examine": true, 00:15:55.525 "bdev_io_cache_size": 256, 00:15:55.525 "bdev_io_pool_size": 65535, 00:15:55.525 "iobuf_large_cache_size": 16, 00:15:55.525 "iobuf_small_cache_size": 128 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_raid_set_options", 00:15:55.525 "params": { 00:15:55.525 "process_max_bandwidth_mb_sec": 0, 00:15:55.525 "process_window_size_kb": 1024 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_iscsi_set_options", 00:15:55.525 "params": { 00:15:55.525 "timeout_sec": 30 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_nvme_set_options", 00:15:55.525 "params": { 00:15:55.525 "action_on_timeout": "none", 00:15:55.525 "allow_accel_sequence": false, 00:15:55.525 "arbitration_burst": 0, 00:15:55.525 "bdev_retry_count": 3, 00:15:55.525 "ctrlr_loss_timeout_sec": 0, 00:15:55.525 "delay_cmd_submit": true, 00:15:55.525 "dhchap_dhgroups": [ 00:15:55.525 "null", 00:15:55.525 "ffdhe2048", 00:15:55.525 "ffdhe3072", 00:15:55.525 "ffdhe4096", 00:15:55.525 "ffdhe6144", 00:15:55.525 "ffdhe8192" 00:15:55.525 ], 00:15:55.525 "dhchap_digests": [ 00:15:55.525 "sha256", 00:15:55.525 "sha384", 00:15:55.525 "sha512" 00:15:55.525 ], 00:15:55.525 "disable_auto_failback": false, 00:15:55.525 "fast_io_fail_timeout_sec": 0, 00:15:55.525 "generate_uuids": false, 00:15:55.525 "high_priority_weight": 0, 00:15:55.525 "io_path_stat": false, 00:15:55.525 "io_queue_requests": 512, 00:15:55.525 "keep_alive_timeout_ms": 10000, 00:15:55.525 "low_priority_weight": 0, 00:15:55.525 "medium_priority_weight": 0, 00:15:55.525 "nvme_adminq_poll_period_us": 10000, 00:15:55.525 "nvme_error_stat": false, 00:15:55.525 "nvme_ioq_poll_period_us": 0, 00:15:55.525 "rdma_cm_event_timeout_ms": 0, 00:15:55.525 "rdma_max_cq_size": 0, 00:15:55.525 "rdma_srq_size": 0, 00:15:55.525 "reconnect_delay_sec": 0, 00:15:55.525 "timeout_admin_us": 0, 00:15:55.525 "timeout_us": 0, 00:15:55.525 "transport_ack_timeout": 0, 00:15:55.525 "transport_retry_count": 4, 00:15:55.525 "transport_tos": 0 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_nvme_attach_controller", 00:15:55.525 "params": { 00:15:55.525 "adrfam": "IPv4", 00:15:55.525 "ctrlr_loss_timeout_sec": 0, 00:15:55.525 "ddgst": false, 00:15:55.525 "fast_io_fail_timeout_sec": 0, 00:15:55.525 "hdgst": false, 00:15:55.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:55.525 "multipath": "multipath", 00:15:55.525 "name": "TLSTEST", 00:15:55.525 "prchk_guard": false, 00:15:55.525 "prchk_reftag": false, 00:15:55.525 "psk": "key0", 00:15:55.525 "reconnect_delay_sec": 0, 00:15:55.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.525 "traddr": "10.0.0.3", 00:15:55.525 "trsvcid": "4420", 00:15:55.525 "trtype": "TCP" 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_nvme_set_hotplug", 00:15:55.525 "params": { 00:15:55.525 "enable": false, 00:15:55.525 "period_us": 100000 00:15:55.525 } 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "method": "bdev_wait_for_examine" 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }, 00:15:55.525 { 00:15:55.525 "subsystem": "nbd", 00:15:55.525 "config": [] 00:15:55.525 } 00:15:55.525 ] 00:15:55.525 }' 00:15:55.525 17:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.525 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.785 [2024-11-19 17:14:48.096907] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:15:55.785 [2024-11-19 17:14:48.097039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84040 ] 00:15:55.785 [2024-11-19 17:14:48.249622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.785 [2024-11-19 17:14:48.305697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.043 [2024-11-19 17:14:48.488140] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:56.611 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.611 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:56.611 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:56.871 Running I/O for 10 seconds... 00:15:58.743 4285.00 IOPS, 16.74 MiB/s [2024-11-19T17:14:52.249Z] 4224.00 IOPS, 16.50 MiB/s [2024-11-19T17:14:53.628Z] 4096.00 IOPS, 16.00 MiB/s [2024-11-19T17:14:54.565Z] 4112.00 IOPS, 16.06 MiB/s [2024-11-19T17:14:55.501Z] 4083.20 IOPS, 15.95 MiB/s [2024-11-19T17:14:56.437Z] 4096.00 IOPS, 16.00 MiB/s [2024-11-19T17:14:57.373Z] 4114.71 IOPS, 16.07 MiB/s [2024-11-19T17:14:58.309Z] 4146.12 IOPS, 16.20 MiB/s [2024-11-19T17:14:59.245Z] 4128.89 IOPS, 16.13 MiB/s [2024-11-19T17:14:59.503Z] 4082.40 IOPS, 15.95 MiB/s 00:16:06.970 Latency(us) 00:16:06.970 [2024-11-19T17:14:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.970 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:06.970 Verification LBA range: start 0x0 length 0x2000 00:16:06.970 TLSTESTn1 : 10.02 4086.68 15.96 0.00 0.00 31259.82 7864.32 24188.74 00:16:06.970 [2024-11-19T17:14:59.503Z] =================================================================================================================== 00:16:06.970 [2024-11-19T17:14:59.503Z] Total : 4086.68 15.96 0.00 0.00 31259.82 7864.32 24188.74 00:16:06.970 { 00:16:06.970 "results": [ 00:16:06.970 { 00:16:06.970 "job": "TLSTESTn1", 00:16:06.970 "core_mask": "0x4", 00:16:06.970 "workload": "verify", 00:16:06.970 "status": "finished", 00:16:06.970 "verify_range": { 00:16:06.970 "start": 0, 00:16:06.970 "length": 8192 00:16:06.970 }, 00:16:06.970 "queue_depth": 128, 00:16:06.970 "io_size": 4096, 00:16:06.970 "runtime": 10.019873, 00:16:06.970 "iops": 4086.6785437300455, 00:16:06.970 "mibps": 15.96358806144549, 00:16:06.970 "io_failed": 0, 00:16:06.970 "io_timeout": 0, 00:16:06.970 "avg_latency_us": 31259.8170586198, 00:16:06.970 "min_latency_us": 7864.32, 00:16:06.970 "max_latency_us": 24188.741818181818 00:16:06.970 } 00:16:06.970 ], 00:16:06.970 "core_count": 1 00:16:06.970 } 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84040 
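The 10-second verify run above was triggered externally: bdevperf was launched with -z, so it performs no I/O on its own, and target/tls.sh drives it through the bdevperf.py helper. The sequence, using the invocations from this run (the -t 20 on the helper is, as far as the script shows, a wait timeout for the helper rather than the test duration, which is the -t 10 given to bdevperf itself):

  # -z: start bdevperf idle, listening on the given RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # register the key and attach the controller over that socket (see above),
  # then kick off the queued workload and wait for the results JSON
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
      -s /var/tmp/bdevperf.sock perform_tests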
00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84040 ']' 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84040 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84040 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:06.970 killing process with pid 84040 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84040' 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84040 00:16:06.970 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.970 00:16:06.970 Latency(us) 00:16:06.970 [2024-11-19T17:14:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.970 [2024-11-19T17:14:59.503Z] =================================================================================================================== 00:16:06.970 [2024-11-19T17:14:59.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.970 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84040 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83996 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83996 ']' 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83996 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83996 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83996' 00:16:07.229 killing process with pid 83996 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83996 00:16:07.229 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83996 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84196 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84196 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84196 ']' 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.487 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.487 [2024-11-19 17:14:59.863997] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:07.487 [2024-11-19 17:14:59.864104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.746 [2024-11-19 17:15:00.019637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.746 [2024-11-19 17:15:00.102604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.746 [2024-11-19 17:15:00.102673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.746 [2024-11-19 17:15:00.102687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.746 [2024-11-19 17:15:00.102699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.746 [2024-11-19 17:15:00.102708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:07.746 [2024-11-19 17:15:00.103217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.wVPZy7rVSS 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wVPZy7rVSS 00:16:08.680 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:08.940 [2024-11-19 17:15:01.243682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.940 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:09.199 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:09.458 [2024-11-19 17:15:01.831830] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:09.458 [2024-11-19 17:15:01.832161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:09.458 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:09.716 malloc0 00:16:09.716 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:16:10.238 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84311 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84311 /var/tmp/bdevperf.sock 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84311 ']' 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.502 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.503 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.503 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.503 [2024-11-19 17:15:02.925686] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:10.503 [2024-11-19 17:15:02.925821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84311 ] 00:16:10.762 [2024-11-19 17:15:03.069707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.762 [2024-11-19 17:15:03.135884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.699 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.699 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:11.699 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:16:11.959 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:12.219 [2024-11-19 17:15:04.557827] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:12.219 nvme0n1 00:16:12.219 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:12.478 Running I/O for 1 seconds... 
00:16:13.414 3870.00 IOPS, 15.12 MiB/s 00:16:13.414 Latency(us) 00:16:13.414 [2024-11-19T17:15:05.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.414 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.414 Verification LBA range: start 0x0 length 0x2000 00:16:13.414 nvme0n1 : 1.01 3945.40 15.41 0.00 0.00 32172.60 4796.04 30504.03 00:16:13.414 [2024-11-19T17:15:05.947Z] =================================================================================================================== 00:16:13.414 [2024-11-19T17:15:05.947Z] Total : 3945.40 15.41 0.00 0.00 32172.60 4796.04 30504.03 00:16:13.414 { 00:16:13.414 "results": [ 00:16:13.414 { 00:16:13.414 "job": "nvme0n1", 00:16:13.414 "core_mask": "0x2", 00:16:13.414 "workload": "verify", 00:16:13.414 "status": "finished", 00:16:13.414 "verify_range": { 00:16:13.414 "start": 0, 00:16:13.414 "length": 8192 00:16:13.414 }, 00:16:13.414 "queue_depth": 128, 00:16:13.414 "io_size": 4096, 00:16:13.414 "runtime": 1.013332, 00:16:13.414 "iops": 3945.3999281578003, 00:16:13.414 "mibps": 15.411718469366408, 00:16:13.414 "io_failed": 0, 00:16:13.414 "io_timeout": 0, 00:16:13.414 "avg_latency_us": 32172.59818272773, 00:16:13.414 "min_latency_us": 4796.043636363636, 00:16:13.414 "max_latency_us": 30504.02909090909 00:16:13.414 } 00:16:13.414 ], 00:16:13.414 "core_count": 1 00:16:13.414 } 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84311 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84311 ']' 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84311 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84311 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:13.414 killing process with pid 84311 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84311' 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84311 00:16:13.414 Received shutdown signal, test time was about 1.000000 seconds 00:16:13.414 00:16:13.414 Latency(us) 00:16:13.414 [2024-11-19T17:15:05.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.414 [2024-11-19T17:15:05.947Z] =================================================================================================================== 00:16:13.414 [2024-11-19T17:15:05.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.414 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84311 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84196 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84196 ']' 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84196 00:16:13.673 17:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84196 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.673 killing process with pid 84196 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84196' 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84196 00:16:13.673 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84196 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84389 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84389 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84389 ']' 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.932 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.932 [2024-11-19 17:15:06.460520] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:13.932 [2024-11-19 17:15:06.460626] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.191 [2024-11-19 17:15:06.603066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.191 [2024-11-19 17:15:06.679575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.191 [2024-11-19 17:15:06.679652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.191 [2024-11-19 17:15:06.679662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.191 [2024-11-19 17:15:06.679674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.191 [2024-11-19 17:15:06.679680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.191 [2024-11-19 17:15:06.680189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.451 [2024-11-19 17:15:06.896977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.451 malloc0 00:16:14.451 [2024-11-19 17:15:06.932180] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:14.451 [2024-11-19 17:15:06.932514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84426 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84426 /var/tmp/bdevperf.sock 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84426 ']' 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.451 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.710 [2024-11-19 17:15:07.014732] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:14.710 [2024-11-19 17:15:07.014861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84426 ] 00:16:14.710 [2024-11-19 17:15:07.163645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.710 [2024-11-19 17:15:07.234460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.969 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.969 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:14.969 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS 00:16:15.227 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:15.485 [2024-11-19 17:15:07.887318] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:15.485 nvme0n1 00:16:15.485 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:15.743 Running I/O for 1 seconds... 00:16:16.678 3821.00 IOPS, 14.93 MiB/s 00:16:16.678 Latency(us) 00:16:16.678 [2024-11-19T17:15:09.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.678 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.678 Verification LBA range: start 0x0 length 0x2000 00:16:16.678 nvme0n1 : 1.02 3887.82 15.19 0.00 0.00 32636.91 4855.62 25380.31 00:16:16.678 [2024-11-19T17:15:09.211Z] =================================================================================================================== 00:16:16.678 [2024-11-19T17:15:09.211Z] Total : 3887.82 15.19 0.00 0.00 32636.91 4855.62 25380.31 00:16:16.678 { 00:16:16.678 "results": [ 00:16:16.678 { 00:16:16.678 "job": "nvme0n1", 00:16:16.678 "core_mask": "0x2", 00:16:16.678 "workload": "verify", 00:16:16.678 "status": "finished", 00:16:16.678 "verify_range": { 00:16:16.678 "start": 0, 00:16:16.678 "length": 8192 00:16:16.678 }, 00:16:16.678 "queue_depth": 128, 00:16:16.678 "io_size": 4096, 00:16:16.678 "runtime": 1.015994, 00:16:16.678 "iops": 3887.8182351470577, 00:16:16.678 "mibps": 15.186789981043194, 00:16:16.678 "io_failed": 0, 00:16:16.678 "io_timeout": 0, 00:16:16.678 "avg_latency_us": 32636.912916915993, 00:16:16.678 "min_latency_us": 4855.6218181818185, 00:16:16.678 "max_latency_us": 25380.305454545454 00:16:16.678 } 00:16:16.678 ], 00:16:16.678 "core_count": 1 00:16:16.678 } 00:16:16.678 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:16.678 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.678 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.936 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.937 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # 
tgtcfg='{ 00:16:16.937 "subsystems": [ 00:16:16.937 { 00:16:16.937 "subsystem": "keyring", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "keyring_file_add_key", 00:16:16.937 "params": { 00:16:16.937 "name": "key0", 00:16:16.937 "path": "/tmp/tmp.wVPZy7rVSS" 00:16:16.937 } 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "iobuf", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "iobuf_set_options", 00:16:16.937 "params": { 00:16:16.937 "enable_numa": false, 00:16:16.937 "large_bufsize": 135168, 00:16:16.937 "large_pool_count": 1024, 00:16:16.937 "small_bufsize": 8192, 00:16:16.937 "small_pool_count": 8192 00:16:16.937 } 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "sock", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "sock_set_default_impl", 00:16:16.937 "params": { 00:16:16.937 "impl_name": "posix" 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "sock_impl_set_options", 00:16:16.937 "params": { 00:16:16.937 "enable_ktls": false, 00:16:16.937 "enable_placement_id": 0, 00:16:16.937 "enable_quickack": false, 00:16:16.937 "enable_recv_pipe": true, 00:16:16.937 "enable_zerocopy_send_client": false, 00:16:16.937 "enable_zerocopy_send_server": true, 00:16:16.937 "impl_name": "ssl", 00:16:16.937 "recv_buf_size": 4096, 00:16:16.937 "send_buf_size": 4096, 00:16:16.937 "tls_version": 0, 00:16:16.937 "zerocopy_threshold": 0 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "sock_impl_set_options", 00:16:16.937 "params": { 00:16:16.937 "enable_ktls": false, 00:16:16.937 "enable_placement_id": 0, 00:16:16.937 "enable_quickack": false, 00:16:16.937 "enable_recv_pipe": true, 00:16:16.937 "enable_zerocopy_send_client": false, 00:16:16.937 "enable_zerocopy_send_server": true, 00:16:16.937 "impl_name": "posix", 00:16:16.937 "recv_buf_size": 2097152, 00:16:16.937 "send_buf_size": 2097152, 00:16:16.937 "tls_version": 0, 00:16:16.937 "zerocopy_threshold": 0 00:16:16.937 } 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "vmd", 00:16:16.937 "config": [] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "accel", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "accel_set_options", 00:16:16.937 "params": { 00:16:16.937 "buf_count": 2048, 00:16:16.937 "large_cache_size": 16, 00:16:16.937 "sequence_count": 2048, 00:16:16.937 "small_cache_size": 128, 00:16:16.937 "task_count": 2048 00:16:16.937 } 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "bdev", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "bdev_set_options", 00:16:16.937 "params": { 00:16:16.937 "bdev_auto_examine": true, 00:16:16.937 "bdev_io_cache_size": 256, 00:16:16.937 "bdev_io_pool_size": 65535, 00:16:16.937 "iobuf_large_cache_size": 16, 00:16:16.937 "iobuf_small_cache_size": 128 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_raid_set_options", 00:16:16.937 "params": { 00:16:16.937 "process_max_bandwidth_mb_sec": 0, 00:16:16.937 "process_window_size_kb": 1024 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_iscsi_set_options", 00:16:16.937 "params": { 00:16:16.937 "timeout_sec": 30 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_nvme_set_options", 00:16:16.937 "params": { 00:16:16.937 "action_on_timeout": "none", 00:16:16.937 "allow_accel_sequence": false, 00:16:16.937 "arbitration_burst": 0, 
00:16:16.937 "bdev_retry_count": 3, 00:16:16.937 "ctrlr_loss_timeout_sec": 0, 00:16:16.937 "delay_cmd_submit": true, 00:16:16.937 "dhchap_dhgroups": [ 00:16:16.937 "null", 00:16:16.937 "ffdhe2048", 00:16:16.937 "ffdhe3072", 00:16:16.937 "ffdhe4096", 00:16:16.937 "ffdhe6144", 00:16:16.937 "ffdhe8192" 00:16:16.937 ], 00:16:16.937 "dhchap_digests": [ 00:16:16.937 "sha256", 00:16:16.937 "sha384", 00:16:16.937 "sha512" 00:16:16.937 ], 00:16:16.937 "disable_auto_failback": false, 00:16:16.937 "fast_io_fail_timeout_sec": 0, 00:16:16.937 "generate_uuids": false, 00:16:16.937 "high_priority_weight": 0, 00:16:16.937 "io_path_stat": false, 00:16:16.937 "io_queue_requests": 0, 00:16:16.937 "keep_alive_timeout_ms": 10000, 00:16:16.937 "low_priority_weight": 0, 00:16:16.937 "medium_priority_weight": 0, 00:16:16.937 "nvme_adminq_poll_period_us": 10000, 00:16:16.937 "nvme_error_stat": false, 00:16:16.937 "nvme_ioq_poll_period_us": 0, 00:16:16.937 "rdma_cm_event_timeout_ms": 0, 00:16:16.937 "rdma_max_cq_size": 0, 00:16:16.937 "rdma_srq_size": 0, 00:16:16.937 "reconnect_delay_sec": 0, 00:16:16.937 "timeout_admin_us": 0, 00:16:16.937 "timeout_us": 0, 00:16:16.937 "transport_ack_timeout": 0, 00:16:16.937 "transport_retry_count": 4, 00:16:16.937 "transport_tos": 0 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_nvme_set_hotplug", 00:16:16.937 "params": { 00:16:16.937 "enable": false, 00:16:16.937 "period_us": 100000 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_malloc_create", 00:16:16.937 "params": { 00:16:16.937 "block_size": 4096, 00:16:16.937 "dif_is_head_of_md": false, 00:16:16.937 "dif_pi_format": 0, 00:16:16.937 "dif_type": 0, 00:16:16.937 "md_size": 0, 00:16:16.937 "name": "malloc0", 00:16:16.937 "num_blocks": 8192, 00:16:16.937 "optimal_io_boundary": 0, 00:16:16.937 "physical_block_size": 4096, 00:16:16.937 "uuid": "0fc9b956-4cf1-4680-9264-ee97e138c306" 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "bdev_wait_for_examine" 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "nbd", 00:16:16.937 "config": [] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "scheduler", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "framework_set_scheduler", 00:16:16.937 "params": { 00:16:16.937 "name": "static" 00:16:16.937 } 00:16:16.937 } 00:16:16.937 ] 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "subsystem": "nvmf", 00:16:16.937 "config": [ 00:16:16.937 { 00:16:16.937 "method": "nvmf_set_config", 00:16:16.937 "params": { 00:16:16.937 "admin_cmd_passthru": { 00:16:16.937 "identify_ctrlr": false 00:16:16.937 }, 00:16:16.937 "dhchap_dhgroups": [ 00:16:16.937 "null", 00:16:16.937 "ffdhe2048", 00:16:16.937 "ffdhe3072", 00:16:16.937 "ffdhe4096", 00:16:16.937 "ffdhe6144", 00:16:16.937 "ffdhe8192" 00:16:16.937 ], 00:16:16.937 "dhchap_digests": [ 00:16:16.937 "sha256", 00:16:16.937 "sha384", 00:16:16.937 "sha512" 00:16:16.937 ], 00:16:16.937 "discovery_filter": "match_any" 00:16:16.937 } 00:16:16.937 }, 00:16:16.937 { 00:16:16.937 "method": "nvmf_set_max_subsystems", 00:16:16.937 "params": { 00:16:16.937 "max_subsystems": 1024 00:16:16.937 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_set_crdt", 00:16:16.938 "params": { 00:16:16.938 "crdt1": 0, 00:16:16.938 "crdt2": 0, 00:16:16.938 "crdt3": 0 00:16:16.938 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_create_transport", 00:16:16.938 "params": { 00:16:16.938 "abort_timeout_sec": 1, 00:16:16.938 "ack_timeout": 
0, 00:16:16.938 "buf_cache_size": 4294967295, 00:16:16.938 "c2h_success": false, 00:16:16.938 "data_wr_pool_size": 0, 00:16:16.938 "dif_insert_or_strip": false, 00:16:16.938 "in_capsule_data_size": 4096, 00:16:16.938 "io_unit_size": 131072, 00:16:16.938 "max_aq_depth": 128, 00:16:16.938 "max_io_qpairs_per_ctrlr": 127, 00:16:16.938 "max_io_size": 131072, 00:16:16.938 "max_queue_depth": 128, 00:16:16.938 "num_shared_buffers": 511, 00:16:16.938 "sock_priority": 0, 00:16:16.938 "trtype": "TCP", 00:16:16.938 "zcopy": false 00:16:16.938 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_create_subsystem", 00:16:16.938 "params": { 00:16:16.938 "allow_any_host": false, 00:16:16.938 "ana_reporting": false, 00:16:16.938 "max_cntlid": 65519, 00:16:16.938 "max_namespaces": 32, 00:16:16.938 "min_cntlid": 1, 00:16:16.938 "model_number": "SPDK bdev Controller", 00:16:16.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.938 "serial_number": "00000000000000000000" 00:16:16.938 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_subsystem_add_host", 00:16:16.938 "params": { 00:16:16.938 "host": "nqn.2016-06.io.spdk:host1", 00:16:16.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.938 "psk": "key0" 00:16:16.938 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_subsystem_add_ns", 00:16:16.938 "params": { 00:16:16.938 "namespace": { 00:16:16.938 "bdev_name": "malloc0", 00:16:16.938 "nguid": "0FC9B9564CF146809264EE97E138C306", 00:16:16.938 "no_auto_visible": false, 00:16:16.938 "nsid": 1, 00:16:16.938 "uuid": "0fc9b956-4cf1-4680-9264-ee97e138c306" 00:16:16.938 }, 00:16:16.938 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:16.938 } 00:16:16.938 }, 00:16:16.938 { 00:16:16.938 "method": "nvmf_subsystem_add_listener", 00:16:16.938 "params": { 00:16:16.938 "listen_address": { 00:16:16.938 "adrfam": "IPv4", 00:16:16.938 "traddr": "10.0.0.3", 00:16:16.938 "trsvcid": "4420", 00:16:16.938 "trtype": "TCP" 00:16:16.938 }, 00:16:16.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.938 "secure_channel": false, 00:16:16.938 "sock_impl": "ssl" 00:16:16.938 } 00:16:16.938 } 00:16:16.938 ] 00:16:16.938 } 00:16:16.938 ] 00:16:16.938 }' 00:16:16.938 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:17.197 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:17.197 "subsystems": [ 00:16:17.197 { 00:16:17.197 "subsystem": "keyring", 00:16:17.197 "config": [ 00:16:17.197 { 00:16:17.197 "method": "keyring_file_add_key", 00:16:17.197 "params": { 00:16:17.197 "name": "key0", 00:16:17.197 "path": "/tmp/tmp.wVPZy7rVSS" 00:16:17.197 } 00:16:17.197 } 00:16:17.197 ] 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "subsystem": "iobuf", 00:16:17.197 "config": [ 00:16:17.197 { 00:16:17.197 "method": "iobuf_set_options", 00:16:17.197 "params": { 00:16:17.197 "enable_numa": false, 00:16:17.197 "large_bufsize": 135168, 00:16:17.197 "large_pool_count": 1024, 00:16:17.197 "small_bufsize": 8192, 00:16:17.197 "small_pool_count": 8192 00:16:17.197 } 00:16:17.197 } 00:16:17.197 ] 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "subsystem": "sock", 00:16:17.197 "config": [ 00:16:17.197 { 00:16:17.197 "method": "sock_set_default_impl", 00:16:17.197 "params": { 00:16:17.197 "impl_name": "posix" 00:16:17.197 } 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "method": "sock_impl_set_options", 00:16:17.197 "params": { 00:16:17.197 "enable_ktls": false, 00:16:17.197 "enable_placement_id": 0, 
00:16:17.197 "enable_quickack": false, 00:16:17.197 "enable_recv_pipe": true, 00:16:17.197 "enable_zerocopy_send_client": false, 00:16:17.197 "enable_zerocopy_send_server": true, 00:16:17.197 "impl_name": "ssl", 00:16:17.197 "recv_buf_size": 4096, 00:16:17.197 "send_buf_size": 4096, 00:16:17.197 "tls_version": 0, 00:16:17.197 "zerocopy_threshold": 0 00:16:17.197 } 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "method": "sock_impl_set_options", 00:16:17.197 "params": { 00:16:17.197 "enable_ktls": false, 00:16:17.197 "enable_placement_id": 0, 00:16:17.197 "enable_quickack": false, 00:16:17.197 "enable_recv_pipe": true, 00:16:17.197 "enable_zerocopy_send_client": false, 00:16:17.197 "enable_zerocopy_send_server": true, 00:16:17.197 "impl_name": "posix", 00:16:17.197 "recv_buf_size": 2097152, 00:16:17.197 "send_buf_size": 2097152, 00:16:17.197 "tls_version": 0, 00:16:17.197 "zerocopy_threshold": 0 00:16:17.197 } 00:16:17.197 } 00:16:17.197 ] 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "subsystem": "vmd", 00:16:17.197 "config": [] 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "subsystem": "accel", 00:16:17.197 "config": [ 00:16:17.197 { 00:16:17.197 "method": "accel_set_options", 00:16:17.197 "params": { 00:16:17.197 "buf_count": 2048, 00:16:17.197 "large_cache_size": 16, 00:16:17.197 "sequence_count": 2048, 00:16:17.197 "small_cache_size": 128, 00:16:17.197 "task_count": 2048 00:16:17.197 } 00:16:17.197 } 00:16:17.197 ] 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "subsystem": "bdev", 00:16:17.197 "config": [ 00:16:17.197 { 00:16:17.197 "method": "bdev_set_options", 00:16:17.197 "params": { 00:16:17.197 "bdev_auto_examine": true, 00:16:17.197 "bdev_io_cache_size": 256, 00:16:17.197 "bdev_io_pool_size": 65535, 00:16:17.197 "iobuf_large_cache_size": 16, 00:16:17.197 "iobuf_small_cache_size": 128 00:16:17.197 } 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "method": "bdev_raid_set_options", 00:16:17.197 "params": { 00:16:17.197 "process_max_bandwidth_mb_sec": 0, 00:16:17.197 "process_window_size_kb": 1024 00:16:17.197 } 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "method": "bdev_iscsi_set_options", 00:16:17.197 "params": { 00:16:17.197 "timeout_sec": 30 00:16:17.197 } 00:16:17.197 }, 00:16:17.197 { 00:16:17.197 "method": "bdev_nvme_set_options", 00:16:17.197 "params": { 00:16:17.197 "action_on_timeout": "none", 00:16:17.197 "allow_accel_sequence": false, 00:16:17.197 "arbitration_burst": 0, 00:16:17.197 "bdev_retry_count": 3, 00:16:17.197 "ctrlr_loss_timeout_sec": 0, 00:16:17.197 "delay_cmd_submit": true, 00:16:17.197 "dhchap_dhgroups": [ 00:16:17.197 "null", 00:16:17.197 "ffdhe2048", 00:16:17.197 "ffdhe3072", 00:16:17.197 "ffdhe4096", 00:16:17.197 "ffdhe6144", 00:16:17.197 "ffdhe8192" 00:16:17.197 ], 00:16:17.197 "dhchap_digests": [ 00:16:17.197 "sha256", 00:16:17.197 "sha384", 00:16:17.197 "sha512" 00:16:17.197 ], 00:16:17.197 "disable_auto_failback": false, 00:16:17.197 "fast_io_fail_timeout_sec": 0, 00:16:17.197 "generate_uuids": false, 00:16:17.197 "high_priority_weight": 0, 00:16:17.197 "io_path_stat": false, 00:16:17.197 "io_queue_requests": 512, 00:16:17.197 "keep_alive_timeout_ms": 10000, 00:16:17.197 "low_priority_weight": 0, 00:16:17.197 "medium_priority_weight": 0, 00:16:17.197 "nvme_adminq_poll_period_us": 10000, 00:16:17.197 "nvme_error_stat": false, 00:16:17.197 "nvme_ioq_poll_period_us": 0, 00:16:17.197 "rdma_cm_event_timeout_ms": 0, 00:16:17.197 "rdma_max_cq_size": 0, 00:16:17.197 "rdma_srq_size": 0, 00:16:17.197 "reconnect_delay_sec": 0, 00:16:17.197 "timeout_admin_us": 0, 00:16:17.197 
"timeout_us": 0, 00:16:17.197 "transport_ack_timeout": 0, 00:16:17.197 "transport_retry_count": 4, 00:16:17.197 "transport_tos": 0 00:16:17.198 } 00:16:17.198 }, 00:16:17.198 { 00:16:17.198 "method": "bdev_nvme_attach_controller", 00:16:17.198 "params": { 00:16:17.198 "adrfam": "IPv4", 00:16:17.198 "ctrlr_loss_timeout_sec": 0, 00:16:17.198 "ddgst": false, 00:16:17.198 "fast_io_fail_timeout_sec": 0, 00:16:17.198 "hdgst": false, 00:16:17.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.198 "multipath": "multipath", 00:16:17.198 "name": "nvme0", 00:16:17.198 "prchk_guard": false, 00:16:17.198 "prchk_reftag": false, 00:16:17.198 "psk": "key0", 00:16:17.198 "reconnect_delay_sec": 0, 00:16:17.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.198 "traddr": "10.0.0.3", 00:16:17.198 "trsvcid": "4420", 00:16:17.198 "trtype": "TCP" 00:16:17.198 } 00:16:17.198 }, 00:16:17.198 { 00:16:17.198 "method": "bdev_nvme_set_hotplug", 00:16:17.198 "params": { 00:16:17.198 "enable": false, 00:16:17.198 "period_us": 100000 00:16:17.198 } 00:16:17.198 }, 00:16:17.198 { 00:16:17.198 "method": "bdev_enable_histogram", 00:16:17.198 "params": { 00:16:17.198 "enable": true, 00:16:17.198 "name": "nvme0n1" 00:16:17.198 } 00:16:17.198 }, 00:16:17.198 { 00:16:17.198 "method": "bdev_wait_for_examine" 00:16:17.198 } 00:16:17.198 ] 00:16:17.198 }, 00:16:17.198 { 00:16:17.198 "subsystem": "nbd", 00:16:17.198 "config": [] 00:16:17.198 } 00:16:17.198 ] 00:16:17.198 }' 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84426 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84426 ']' 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84426 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84426 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:17.198 killing process with pid 84426 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84426' 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84426 00:16:17.198 Received shutdown signal, test time was about 1.000000 seconds 00:16:17.198 00:16:17.198 Latency(us) 00:16:17.198 [2024-11-19T17:15:09.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.198 [2024-11-19T17:15:09.731Z] =================================================================================================================== 00:16:17.198 [2024-11-19T17:15:09.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.198 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84426 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84389 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84389 ']' 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84389 
00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84389 00:16:17.456 killing process with pid 84389 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84389' 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84389 00:16:17.456 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84389 00:16:17.715 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:17.715 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.715 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:17.715 "subsystems": [ 00:16:17.715 { 00:16:17.715 "subsystem": "keyring", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 "method": "keyring_file_add_key", 00:16:17.715 "params": { 00:16:17.715 "name": "key0", 00:16:17.715 "path": "/tmp/tmp.wVPZy7rVSS" 00:16:17.715 } 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "iobuf", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 "method": "iobuf_set_options", 00:16:17.715 "params": { 00:16:17.715 "enable_numa": false, 00:16:17.715 "large_bufsize": 135168, 00:16:17.715 "large_pool_count": 1024, 00:16:17.715 "small_bufsize": 8192, 00:16:17.715 "small_pool_count": 8192 00:16:17.715 } 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "sock", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 "method": "sock_set_default_impl", 00:16:17.715 "params": { 00:16:17.715 "impl_name": "posix" 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "sock_impl_set_options", 00:16:17.715 "params": { 00:16:17.715 "enable_ktls": false, 00:16:17.715 "enable_placement_id": 0, 00:16:17.715 "enable_quickack": false, 00:16:17.715 "enable_recv_pipe": true, 00:16:17.715 "enable_zerocopy_send_client": false, 00:16:17.715 "enable_zerocopy_send_server": true, 00:16:17.715 "impl_name": "ssl", 00:16:17.715 "recv_buf_size": 4096, 00:16:17.715 "send_buf_size": 4096, 00:16:17.715 "tls_version": 0, 00:16:17.715 "zerocopy_threshold": 0 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "sock_impl_set_options", 00:16:17.715 "params": { 00:16:17.715 "enable_ktls": false, 00:16:17.715 "enable_placement_id": 0, 00:16:17.715 "enable_quickack": false, 00:16:17.715 "enable_recv_pipe": true, 00:16:17.715 "enable_zerocopy_send_client": false, 00:16:17.715 "enable_zerocopy_send_server": true, 00:16:17.715 "impl_name": "posix", 00:16:17.715 "recv_buf_size": 2097152, 00:16:17.715 "send_buf_size": 2097152, 00:16:17.715 "tls_version": 0, 00:16:17.715 "zerocopy_threshold": 0 00:16:17.715 } 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "vmd", 00:16:17.715 "config": [] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "accel", 00:16:17.715 "config": [ 
00:16:17.715 { 00:16:17.715 "method": "accel_set_options", 00:16:17.715 "params": { 00:16:17.715 "buf_count": 2048, 00:16:17.715 "large_cache_size": 16, 00:16:17.715 "sequence_count": 2048, 00:16:17.715 "small_cache_size": 128, 00:16:17.715 "task_count": 2048 00:16:17.715 } 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "bdev", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 "method": "bdev_set_options", 00:16:17.715 "params": { 00:16:17.715 "bdev_auto_examine": true, 00:16:17.715 "bdev_io_cache_size": 256, 00:16:17.715 "bdev_io_pool_size": 65535, 00:16:17.715 "iobuf_large_cache_size": 16, 00:16:17.715 "iobuf_small_cache_size": 128 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_raid_set_options", 00:16:17.715 "params": { 00:16:17.715 "process_max_bandwidth_mb_sec": 0, 00:16:17.715 "process_window_size_kb": 1024 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_iscsi_set_options", 00:16:17.715 "params": { 00:16:17.715 "timeout_sec": 30 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_nvme_set_options", 00:16:17.715 "params": { 00:16:17.715 "action_on_timeout": "none", 00:16:17.715 "allow_accel_sequence": false, 00:16:17.715 "arbitration_burst": 0, 00:16:17.715 "bdev_retry_count": 3, 00:16:17.715 "ctrlr_loss_timeout_sec": 0, 00:16:17.715 "delay_cmd_submit": true, 00:16:17.715 "dhchap_dhgroups": [ 00:16:17.715 "null", 00:16:17.715 "ffdhe2048", 00:16:17.715 "ffdhe3072", 00:16:17.715 "ffdhe4096", 00:16:17.715 "ffdhe6144", 00:16:17.715 "ffdhe8192" 00:16:17.715 ], 00:16:17.715 "dhchap_digests": [ 00:16:17.715 "sha256", 00:16:17.715 "sha384", 00:16:17.715 "sha512" 00:16:17.715 ], 00:16:17.715 "disable_auto_failback": false, 00:16:17.715 "fast_io_fail_timeout_sec": 0, 00:16:17.715 "generate_uuids": false, 00:16:17.715 "high_priority_weight": 0, 00:16:17.715 "io_path_stat": false, 00:16:17.715 "io_queue_requests": 0, 00:16:17.715 "keep_alive_timeout_ms": 10000, 00:16:17.715 "low_priority_weight": 0, 00:16:17.715 "medium_priority_weight": 0, 00:16:17.715 "nvme_adminq_poll_period_us": 10000, 00:16:17.715 "nvme_error_stat": false, 00:16:17.715 "nvme_ioq_poll_period_us": 0, 00:16:17.715 "rdma_cm_event_timeout_ms": 0, 00:16:17.715 "rdma_max_cq_size": 0, 00:16:17.715 "rdma_srq_size": 0, 00:16:17.715 "reconnect_delay_sec": 0, 00:16:17.715 "timeout_admin_us": 0, 00:16:17.715 "timeout_us": 0, 00:16:17.715 "transport_ack_timeout": 0, 00:16:17.715 "transport_retry_count": 4, 00:16:17.715 "transport_tos": 0 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_nvme_set_hotplug", 00:16:17.715 "params": { 00:16:17.715 "enable": false, 00:16:17.715 "period_us": 100000 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_malloc_create", 00:16:17.715 "params": { 00:16:17.715 "block_size": 4096, 00:16:17.715 "dif_is_head_of_md": false, 00:16:17.715 "dif_pi_format": 0, 00:16:17.715 "dif_type": 0, 00:16:17.715 "md_size": 0, 00:16:17.715 "name": "malloc0", 00:16:17.715 "num_blocks": 8192, 00:16:17.715 "optimal_io_boundary": 0, 00:16:17.715 "physical_block_size": 4096, 00:16:17.715 "uuid": "0fc9b956-4cf1-4680-9264-ee97e138c306" 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "bdev_wait_for_examine" 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "nbd", 00:16:17.715 "config": [] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "scheduler", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 
"method": "framework_set_scheduler", 00:16:17.715 "params": { 00:16:17.715 "name": "static" 00:16:17.715 } 00:16:17.715 } 00:16:17.715 ] 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "subsystem": "nvmf", 00:16:17.715 "config": [ 00:16:17.715 { 00:16:17.715 "method": "nvmf_set_config", 00:16:17.715 "params": { 00:16:17.715 "admin_cmd_passthru": { 00:16:17.715 "identify_ctrlr": false 00:16:17.715 }, 00:16:17.715 "dhchap_dhgroups": [ 00:16:17.715 "null", 00:16:17.715 "ffdhe2048", 00:16:17.715 "ffdhe3072", 00:16:17.715 "ffdhe4096", 00:16:17.715 "ffdhe6144", 00:16:17.715 "ffdhe8192" 00:16:17.715 ], 00:16:17.715 "dhchap_digests": [ 00:16:17.715 "sha256", 00:16:17.715 "sha384", 00:16:17.715 "sha512" 00:16:17.715 ], 00:16:17.715 "discovery_filter": "match_any" 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "nvmf_set_max_subsystems", 00:16:17.715 "params": { 00:16:17.715 "max_subsystems": 1024 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "nvmf_set_crdt", 00:16:17.715 "params": { 00:16:17.715 "crdt1": 0, 00:16:17.715 "crdt2": 0, 00:16:17.715 "crdt3": 0 00:16:17.715 } 00:16:17.715 }, 00:16:17.715 { 00:16:17.715 "method": "nvmf_create_transport", 00:16:17.715 "params": { 00:16:17.715 "abort_timeout_sec": 1, 00:16:17.715 "ack_timeout": 0, 00:16:17.715 "buf_cache_size": 4294967295, 00:16:17.715 "c2h_success": false, 00:16:17.715 "data_wr_pool_size": 0, 00:16:17.715 "dif_insert_or_strip": false, 00:16:17.716 "in_capsule_data_size": 4096, 00:16:17.716 "io_unit_size": 131072, 00:16:17.716 "max_aq_depth": 128, 00:16:17.716 "max_io_qpairs_per_ctrlr": 127, 00:16:17.716 "max_io_size": 131072, 00:16:17.716 "max_queue_depth": 128, 00:16:17.716 "num_shared_buffers": 511, 00:16:17.716 "sock_priority": 0, 00:16:17.716 "trtype": "TCP", 00:16:17.716 "zcopy": false 00:16:17.716 } 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "method": "nvmf_create_subsystem", 00:16:17.716 "params": { 00:16:17.716 "allow_any_host": false, 00:16:17.716 "ana_reporting": false, 00:16:17.716 "max_cntlid": 65519, 00:16:17.716 "max_namespaces": 32, 00:16:17.716 "min_cntlid": 1, 00:16:17.716 "model_number": "SPDK bdev Controller", 00:16:17.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.716 "serial_number": "00000000000000000000" 00:16:17.716 } 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "method": "nvmf_subsystem_add_host", 00:16:17.716 "params": { 00:16:17.716 "host": "nqn.2016-06.io.spdk:host1", 00:16:17.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.716 "psk": "key0" 00:16:17.716 } 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "method": "nvmf_subsystem_add_ns", 00:16:17.716 "params": { 00:16:17.716 "namespace": { 00:16:17.716 "bdev_name": "malloc0", 00:16:17.716 "nguid": "0FC9B9564CF146809264EE97E138C306", 00:16:17.716 "no_auto_visible": false, 00:16:17.716 "nsid": 1, 00:16:17.716 "uuid": "0fc9b956-4cf1-4680-9264-ee97e138c306" 00:16:17.716 }, 00:16:17.716 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:17.716 } 00:16:17.716 }, 00:16:17.716 { 00:16:17.716 "method": "nvmf_subsystem_add_listener", 00:16:17.716 "params": { 00:16:17.716 "listen_address": { 00:16:17.716 "adrfam": "IPv4", 00:16:17.716 "traddr": "10.0.0.3", 00:16:17.716 "trsvcid": "4420", 00:16:17.716 "trtype": "TCP" 00:16:17.716 }, 00:16:17.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.716 "secure_channel": false, 00:16:17.716 "sock_impl": "ssl" 00:16:17.716 } 00:16:17.716 } 00:16:17.716 ] 00:16:17.716 } 00:16:17.716 ] 00:16:17.716 }' 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84503 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84503 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84503 ']' 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.716 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.975 [2024-11-19 17:15:10.301038] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:17.975 [2024-11-19 17:15:10.301168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.975 [2024-11-19 17:15:10.453976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.234 [2024-11-19 17:15:10.540507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.234 [2024-11-19 17:15:10.540555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.234 [2024-11-19 17:15:10.540564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.234 [2024-11-19 17:15:10.540572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.234 [2024-11-19 17:15:10.540578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
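Note: the target above is restarted non-interactively: the tgtcfg JSON captured earlier with save_config is fed to nvmf_tgt on file descriptor 62 (-c /dev/fd/62), so the keyring, malloc0 bdev, subsystem, and TLS listener are recreated without any further rpc.py calls. A minimal sketch of the same round trip outside the harness (socket path and output file are illustrative, not taken from the log):

  # capture the live configuration of a running target
  scripts/rpc.py -s /var/tmp/spdk.sock save_config > tgt.json
  # relaunch the target with the saved state preloaded
  build/bin/nvmf_tgt -c tgt.json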
00:16:18.234 [2024-11-19 17:15:10.541175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.493 [2024-11-19 17:15:10.839246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.493 [2024-11-19 17:15:10.871184] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:18.493 [2024-11-19 17:15:10.871425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84547 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84547 /var/tmp/bdevperf.sock 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84547 ']' 00:16:19.060 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
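Note: the bdevperf instance below is launched the same way, with the bperfcfg JSON passed on /dev/fd/63, so key0 and the nvme0 controller already exist by the time perform_tests runs. The equivalent manual RPC sequence, as used earlier in this test, would be:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wVPZy7rVSS
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests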
00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.061 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:19.061 "subsystems": [ 00:16:19.061 { 00:16:19.061 "subsystem": "keyring", 00:16:19.061 "config": [ 00:16:19.061 { 00:16:19.061 "method": "keyring_file_add_key", 00:16:19.061 "params": { 00:16:19.061 "name": "key0", 00:16:19.061 "path": "/tmp/tmp.wVPZy7rVSS" 00:16:19.061 } 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "iobuf", 00:16:19.061 "config": [ 00:16:19.061 { 00:16:19.061 "method": "iobuf_set_options", 00:16:19.061 "params": { 00:16:19.061 "enable_numa": false, 00:16:19.061 "large_bufsize": 135168, 00:16:19.061 "large_pool_count": 1024, 00:16:19.061 "small_bufsize": 8192, 00:16:19.061 "small_pool_count": 8192 00:16:19.061 } 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "sock", 00:16:19.061 "config": [ 00:16:19.061 { 00:16:19.061 "method": "sock_set_default_impl", 00:16:19.061 "params": { 00:16:19.061 "impl_name": "posix" 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "sock_impl_set_options", 00:16:19.061 "params": { 00:16:19.061 "enable_ktls": false, 00:16:19.061 "enable_placement_id": 0, 00:16:19.061 "enable_quickack": false, 00:16:19.061 "enable_recv_pipe": true, 00:16:19.061 "enable_zerocopy_send_client": false, 00:16:19.061 "enable_zerocopy_send_server": true, 00:16:19.061 "impl_name": "ssl", 00:16:19.061 "recv_buf_size": 4096, 00:16:19.061 "send_buf_size": 4096, 00:16:19.061 "tls_version": 0, 00:16:19.061 "zerocopy_threshold": 0 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "sock_impl_set_options", 00:16:19.061 "params": { 00:16:19.061 "enable_ktls": false, 00:16:19.061 "enable_placement_id": 0, 00:16:19.061 "enable_quickack": false, 00:16:19.061 "enable_recv_pipe": true, 00:16:19.061 "enable_zerocopy_send_client": false, 00:16:19.061 "enable_zerocopy_send_server": true, 00:16:19.061 "impl_name": "posix", 00:16:19.061 "recv_buf_size": 2097152, 00:16:19.061 "send_buf_size": 2097152, 00:16:19.061 "tls_version": 0, 00:16:19.061 "zerocopy_threshold": 0 00:16:19.061 } 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "vmd", 00:16:19.061 "config": [] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "accel", 00:16:19.061 "config": [ 00:16:19.061 { 00:16:19.061 "method": "accel_set_options", 00:16:19.061 "params": { 00:16:19.061 "buf_count": 2048, 00:16:19.061 "large_cache_size": 16, 00:16:19.061 "sequence_count": 2048, 00:16:19.061 "small_cache_size": 128, 00:16:19.061 "task_count": 2048 00:16:19.061 } 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "bdev", 00:16:19.061 "config": [ 00:16:19.061 { 00:16:19.061 "method": "bdev_set_options", 00:16:19.061 "params": { 00:16:19.061 "bdev_auto_examine": true, 00:16:19.061 "bdev_io_cache_size": 256, 00:16:19.061 "bdev_io_pool_size": 65535, 00:16:19.061 "iobuf_large_cache_size": 16, 00:16:19.061 "iobuf_small_cache_size": 128 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_raid_set_options", 
00:16:19.061 "params": { 00:16:19.061 "process_max_bandwidth_mb_sec": 0, 00:16:19.061 "process_window_size_kb": 1024 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_iscsi_set_options", 00:16:19.061 "params": { 00:16:19.061 "timeout_sec": 30 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_nvme_set_options", 00:16:19.061 "params": { 00:16:19.061 "action_on_timeout": "none", 00:16:19.061 "allow_accel_sequence": false, 00:16:19.061 "arbitration_burst": 0, 00:16:19.061 "bdev_retry_count": 3, 00:16:19.061 "ctrlr_loss_timeout_sec": 0, 00:16:19.061 "delay_cmd_submit": true, 00:16:19.061 "dhchap_dhgroups": [ 00:16:19.061 "null", 00:16:19.061 "ffdhe2048", 00:16:19.061 "ffdhe3072", 00:16:19.061 "ffdhe4096", 00:16:19.061 "ffdhe6144", 00:16:19.061 "ffdhe8192" 00:16:19.061 ], 00:16:19.061 "dhchap_digests": [ 00:16:19.061 "sha256", 00:16:19.061 "sha384", 00:16:19.061 "sha512" 00:16:19.061 ], 00:16:19.061 "disable_auto_failback": false, 00:16:19.061 "fast_io_fail_timeout_sec": 0, 00:16:19.061 "generate_uuids": false, 00:16:19.061 "high_priority_weight": 0, 00:16:19.061 "io_path_stat": false, 00:16:19.061 "io_queue_requests": 512, 00:16:19.061 "keep_alive_timeout_ms": 10000, 00:16:19.061 "low_priority_weight": 0, 00:16:19.061 "medium_priority_weight": 0, 00:16:19.061 "nvme_adminq_poll_period_us": 10000, 00:16:19.061 "nvme_error_stat": false, 00:16:19.061 "nvme_ioq_poll_period_us": 0, 00:16:19.061 "rdma_cm_event_timeout_ms": 0, 00:16:19.061 "rdma_max_cq_size": 0, 00:16:19.061 "rdma_srq_size": 0, 00:16:19.061 "reconnect_delay_sec": 0, 00:16:19.061 "timeout_admin_us": 0, 00:16:19.061 "timeout_us": 0, 00:16:19.061 "transport_ack_timeout": 0, 00:16:19.061 "transport_retry_count": 4, 00:16:19.061 "transport_tos": 0 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_nvme_attach_controller", 00:16:19.061 "params": { 00:16:19.061 "adrfam": "IPv4", 00:16:19.061 "ctrlr_loss_timeout_sec": 0, 00:16:19.061 "ddgst": false, 00:16:19.061 "fast_io_fail_timeout_sec": 0, 00:16:19.061 "hdgst": false, 00:16:19.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.061 "multipath": "multipath", 00:16:19.061 "name": "nvme0", 00:16:19.061 "prchk_guard": false, 00:16:19.061 "prchk_reftag": false, 00:16:19.061 "psk": "key0", 00:16:19.061 "reconnect_delay_sec": 0, 00:16:19.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.061 "traddr": "10.0.0.3", 00:16:19.061 "trsvcid": "4420", 00:16:19.061 "trtype": "TCP" 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_nvme_set_hotplug", 00:16:19.061 "params": { 00:16:19.061 "enable": false, 00:16:19.061 "period_us": 100000 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_enable_histogram", 00:16:19.061 "params": { 00:16:19.061 "enable": true, 00:16:19.061 "name": "nvme0n1" 00:16:19.061 } 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "method": "bdev_wait_for_examine" 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }, 00:16:19.061 { 00:16:19.061 "subsystem": "nbd", 00:16:19.061 "config": [] 00:16:19.061 } 00:16:19.061 ] 00:16:19.061 }' 00:16:19.061 [2024-11-19 17:15:11.479934] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:19.061 [2024-11-19 17:15:11.480059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84547 ] 00:16:19.320 [2024-11-19 17:15:11.640934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.320 [2024-11-19 17:15:11.715872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.583 [2024-11-19 17:15:11.901604] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:20.154 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.154 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:20.154 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.154 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:20.413 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.413 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:20.413 Running I/O for 1 seconds... 00:16:21.792 3605.00 IOPS, 14.08 MiB/s 00:16:21.792 Latency(us) 00:16:21.792 [2024-11-19T17:15:14.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.792 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:21.792 Verification LBA range: start 0x0 length 0x2000 00:16:21.792 nvme0n1 : 1.02 3652.28 14.27 0.00 0.00 34628.79 9353.77 27763.43 00:16:21.792 [2024-11-19T17:15:14.325Z] =================================================================================================================== 00:16:21.792 [2024-11-19T17:15:14.325Z] Total : 3652.28 14.27 0.00 0.00 34628.79 9353.77 27763.43 00:16:21.792 { 00:16:21.792 "results": [ 00:16:21.792 { 00:16:21.792 "job": "nvme0n1", 00:16:21.792 "core_mask": "0x2", 00:16:21.792 "workload": "verify", 00:16:21.792 "status": "finished", 00:16:21.792 "verify_range": { 00:16:21.792 "start": 0, 00:16:21.792 "length": 8192 00:16:21.792 }, 00:16:21.792 "queue_depth": 128, 00:16:21.792 "io_size": 4096, 00:16:21.792 "runtime": 1.022102, 00:16:21.792 "iops": 3652.277365664092, 00:16:21.792 "mibps": 14.26670845962536, 00:16:21.792 "io_failed": 0, 00:16:21.792 "io_timeout": 0, 00:16:21.792 "avg_latency_us": 34628.79276039257, 00:16:21.792 "min_latency_us": 9353.774545454546, 00:16:21.792 "max_latency_us": 27763.432727272728 00:16:21.792 } 00:16:21.792 ], 00:16:21.792 "core_count": 1 00:16:21.792 } 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:21.792 
17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:21.792 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:21.792 nvmf_trace.0 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84547 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84547 ']' 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84547 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84547 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.792 killing process with pid 84547 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84547' 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84547 00:16:21.792 Received shutdown signal, test time was about 1.000000 seconds 00:16:21.792 00:16:21.792 Latency(us) 00:16:21.792 [2024-11-19T17:15:14.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.792 [2024-11-19T17:15:14.325Z] =================================================================================================================== 00:16:21.792 [2024-11-19T17:15:14.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.792 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84547 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.051 rmmod nvme_tcp 00:16:22.051 rmmod nvme_fabrics 00:16:22.051 rmmod nvme_keyring 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84503 ']' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84503 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84503 ']' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84503 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84503 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.051 killing process with pid 84503 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84503' 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84503 00:16:22.051 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84503 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.618 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.618 17:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8jQW0zzZR3 /tmp/tmp.7CdtKKIpaD /tmp/tmp.wVPZy7rVSS 00:16:22.618 ************************************ 00:16:22.618 END TEST nvmf_tls 00:16:22.618 ************************************ 00:16:22.618 00:16:22.618 real 1m26.333s 00:16:22.618 user 2m18.487s 00:16:22.618 sys 0m29.476s 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.618 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.880 ************************************ 00:16:22.880 START TEST nvmf_fips 00:16:22.880 ************************************ 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:22.880 * Looking for test storage... 
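The nvmftestfini/nvmf_veth_fini sequence traced just above never deletes its firewall rules one by one: every rule the test adds carries an SPDK_NVMF comment, so teardown round-trips the whole ruleset and filters those lines out. A minimal sketch of the same cleanup, assuming the rules were inserted with -m comment --comment 'SPDK_NVMF:...' as in this log (the traced helper hides namespace removal behind _remove_spdk_ns; a plain ip netns delete stands in for it here):

    #!/usr/bin/env bash
    # Restore the pre-test iptables state: dump, drop tagged rules, reload.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Unhook bridge members, then delete the bridge, veths, and namespace.
    # '|| true' keeps the cleanup idempotent if a device never came up.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip link delete nvmf_init_if2       2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true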
00:16:22.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:22.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.880 --rc genhtml_branch_coverage=1 00:16:22.880 --rc genhtml_function_coverage=1 00:16:22.880 --rc genhtml_legend=1 00:16:22.880 --rc geninfo_all_blocks=1 00:16:22.880 --rc geninfo_unexecuted_blocks=1 00:16:22.880 00:16:22.880 ' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:22.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.880 --rc genhtml_branch_coverage=1 00:16:22.880 --rc genhtml_function_coverage=1 00:16:22.880 --rc genhtml_legend=1 00:16:22.880 --rc geninfo_all_blocks=1 00:16:22.880 --rc geninfo_unexecuted_blocks=1 00:16:22.880 00:16:22.880 ' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:22.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.880 --rc genhtml_branch_coverage=1 00:16:22.880 --rc genhtml_function_coverage=1 00:16:22.880 --rc genhtml_legend=1 00:16:22.880 --rc geninfo_all_blocks=1 00:16:22.880 --rc geninfo_unexecuted_blocks=1 00:16:22.880 00:16:22.880 ' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:22.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.880 --rc genhtml_branch_coverage=1 00:16:22.880 --rc genhtml_function_coverage=1 00:16:22.880 --rc genhtml_legend=1 00:16:22.880 --rc geninfo_all_blocks=1 00:16:22.880 --rc geninfo_unexecuted_blocks=1 00:16:22.880 00:16:22.880 ' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
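The version gates in this preamble (the lcov check above, and the OpenSSL >= 3.0.0 check a few entries below) all funnel through the cmp_versions helper in scripts/common.sh: split both versions on '.' and '-', then compare component by component as integers, treating missing components as 0. A self-contained sketch of that comparison; version_ge is a stand-in name for the in-tree ge/cmp_versions pair, and it is simplified in that it assumes purely numeric components:

    #!/usr/bin/env bash
    # version_ge A B: succeed (return 0) when dotted version A >= B.
    version_ge() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # so 3.1 compares as 3.1.0
            (( a > b )) && return 0
            (( a < b )) && return 1
        done
        return 0    # all components equal
    }

    # The same check fips.sh performs before probing the FIPS provider:
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 &&
        echo 'OpenSSL can host provider-based FIPS checks'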
00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.880 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:22.881 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:23.219 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:23.220 Error setting digest 00:16:23.220 40625800D87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:23.220 40625800D87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.220 
17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.220 Cannot find device "nvmf_init_br" 00:16:23.220 17:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.220 Cannot find device "nvmf_init_br2" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.220 Cannot find device "nvmf_tgt_br" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.220 Cannot find device "nvmf_tgt_br2" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.220 Cannot find device "nvmf_init_br" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.220 Cannot find device "nvmf_init_br2" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.220 Cannot find device "nvmf_tgt_br" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.220 Cannot find device "nvmf_tgt_br2" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.220 Cannot find device "nvmf_br" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.220 Cannot find device "nvmf_init_if" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.220 Cannot find device "nvmf_init_if2" 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.220 17:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.220 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:23.479 00:16:23.479 --- 10.0.0.3 ping statistics --- 00:16:23.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.479 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.479 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.479 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:16:23.479 00:16:23.479 --- 10.0.0.4 ping statistics --- 00:16:23.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.479 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:23.479 00:16:23.479 --- 10.0.0.1 ping statistics --- 00:16:23.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.479 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:23.479 00:16:23.479 --- 10.0.0.2 ping statistics --- 00:16:23.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.479 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84887 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84887 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84887 ']' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.479 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:23.738 [2024-11-19 17:15:16.073185] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
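Everything those four pings just exercised is built from veth pairs and a single bridge: the *_if ends hold the addresses (initiator side in the root namespace, target side inside nvmf_tgt_ns_spdk), while the *_br peer ends are enslaved to nvmf_br so the 10.0.0.0/24 segment spans both namespaces. A pared-down sketch of the same topology with one pair per side (the traced common.sh builds two per side to provide the extra 10.0.0.2/10.0.0.4 addresses):

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side; the *_br ends become bridge ports.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Stitch the root-namespace ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Admit NVMe/TCP (port 4420) and confirm the path end to end.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3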
00:16:23.738 [2024-11-19 17:15:16.074050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.738 [2024-11-19 17:15:16.233218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.997 [2024-11-19 17:15:16.296360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.997 [2024-11-19 17:15:16.296419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.997 [2024-11-19 17:15:16.296429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.997 [2024-11-19 17:15:16.296437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.997 [2024-11-19 17:15:16.296443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.997 [2024-11-19 17:15:16.296909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.r0z 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.r0z 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.r0z 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.r0z 00:16:24.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.193 [2024-11-19 17:15:17.515311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.193 [2024-11-19 17:15:17.531214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:25.193 [2024-11-19 17:15:17.531460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:25.193 malloc0 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.193 17:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84941 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84941 /var/tmp/bdevperf.sock 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84941 ']' 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.193 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:25.193 [2024-11-19 17:15:17.706096] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:25.193 [2024-11-19 17:15:17.706211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84941 ] 00:16:25.451 [2024-11-19 17:15:17.865545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.451 [2024-11-19 17:15:17.953890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.385 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.385 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:26.385 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.r0z 00:16:26.644 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:26.902 [2024-11-19 17:15:19.288839] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.902 TLSTESTn1 00:16:26.902 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:27.159 Running I/O for 10 seconds... 
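The TLSTESTn1 bdev created just above comes out of a fixed three-step recipe: write the interchange-format PSK to a private file, register that file with the keyring, then name the key in bdev_nvme_attach_controller so the TCP connection negotiates TLS. A condensed replay of those RPCs against the bdevperf socket, reusing the values from this trace:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # PSK interchange format: identity version, hash id, base64 key material.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"    # same mode the test sets before registering

    # File-backed keyring entry, then a TLS-enabled attach via --psk.
    "$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0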
00:16:29.030 3543.00 IOPS, 13.84 MiB/s [2024-11-19T17:15:22.939Z] 3908.00 IOPS, 15.27 MiB/s [2024-11-19T17:15:23.875Z] 4003.67 IOPS, 15.64 MiB/s [2024-11-19T17:15:24.812Z] 4054.25 IOPS, 15.84 MiB/s [2024-11-19T17:15:25.748Z] 4111.20 IOPS, 16.06 MiB/s [2024-11-19T17:15:26.683Z] 4162.33 IOPS, 16.26 MiB/s [2024-11-19T17:15:27.620Z] 4201.43 IOPS, 16.41 MiB/s [2024-11-19T17:15:28.553Z] 4231.75 IOPS, 16.53 MiB/s [2024-11-19T17:15:29.931Z] 4221.33 IOPS, 16.49 MiB/s [2024-11-19T17:15:29.931Z] 4212.80 IOPS, 16.46 MiB/s 00:16:37.398 Latency(us) 00:16:37.398 [2024-11-19T17:15:29.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.398 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:37.398 Verification LBA range: start 0x0 length 0x2000 00:16:37.398 TLSTESTn1 : 10.02 4218.89 16.48 0.00 0.00 30285.98 5123.72 24069.59 00:16:37.398 [2024-11-19T17:15:29.931Z] =================================================================================================================== 00:16:37.398 [2024-11-19T17:15:29.931Z] Total : 4218.89 16.48 0.00 0.00 30285.98 5123.72 24069.59 00:16:37.398 { 00:16:37.398 "results": [ 00:16:37.398 { 00:16:37.398 "job": "TLSTESTn1", 00:16:37.398 "core_mask": "0x4", 00:16:37.399 "workload": "verify", 00:16:37.399 "status": "finished", 00:16:37.399 "verify_range": { 00:16:37.399 "start": 0, 00:16:37.399 "length": 8192 00:16:37.399 }, 00:16:37.399 "queue_depth": 128, 00:16:37.399 "io_size": 4096, 00:16:37.399 "runtime": 10.015657, 00:16:37.399 "iops": 4218.8944769174905, 00:16:37.399 "mibps": 16.480056550458947, 00:16:37.399 "io_failed": 0, 00:16:37.399 "io_timeout": 0, 00:16:37.399 "avg_latency_us": 30285.9846481858, 00:16:37.399 "min_latency_us": 5123.723636363637, 00:16:37.399 "max_latency_us": 24069.585454545453 00:16:37.399 } 00:16:37.399 ], 00:16:37.399 "core_count": 1 00:16:37.399 } 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:37.399 nvmf_trace.0 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84941 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84941 ']' 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
84941 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84941 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:37.399 killing process with pid 84941 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84941' 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84941 00:16:37.399 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.399 00:16:37.399 Latency(us) 00:16:37.399 [2024-11-19T17:15:29.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.399 [2024-11-19T17:15:29.932Z] =================================================================================================================== 00:16:37.399 [2024-11-19T17:15:29.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.399 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84941 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.658 rmmod nvme_tcp 00:16:37.658 rmmod nvme_fabrics 00:16:37.658 rmmod nvme_keyring 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84887 ']' 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84887 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84887 ']' 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84887 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.658 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84887 00:16:37.916 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.916 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:16:37.916 killing process with pid 84887 00:16:37.916 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84887' 00:16:37.916 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84887 00:16:37.916 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84887 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.176 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:38.435 17:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.r0z 00:16:38.435 00:16:38.435 real 0m15.587s 00:16:38.435 user 0m21.881s 00:16:38.435 sys 0m6.023s 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:38.435 ************************************ 00:16:38.435 END TEST nvmf_fips 00:16:38.435 ************************************ 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.435 ************************************ 00:16:38.435 START TEST nvmf_control_msg_list 00:16:38.435 ************************************ 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:38.435 * Looking for test storage... 00:16:38.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.435 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.695 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.695 --rc genhtml_branch_coverage=1 00:16:38.695 --rc genhtml_function_coverage=1 00:16:38.695 --rc genhtml_legend=1 00:16:38.695 --rc geninfo_all_blocks=1 00:16:38.695 --rc geninfo_unexecuted_blocks=1 00:16:38.695 00:16:38.695 ' 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.695 --rc genhtml_branch_coverage=1 00:16:38.695 --rc genhtml_function_coverage=1 00:16:38.695 --rc genhtml_legend=1 00:16:38.695 --rc geninfo_all_blocks=1 00:16:38.695 --rc geninfo_unexecuted_blocks=1 00:16:38.695 00:16:38.695 ' 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.695 --rc genhtml_branch_coverage=1 00:16:38.695 --rc genhtml_function_coverage=1 00:16:38.695 --rc genhtml_legend=1 00:16:38.695 --rc geninfo_all_blocks=1 00:16:38.695 --rc geninfo_unexecuted_blocks=1 00:16:38.695 00:16:38.695 ' 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.695 --rc genhtml_branch_coverage=1 00:16:38.695 --rc genhtml_function_coverage=1 00:16:38.695 --rc genhtml_legend=1 00:16:38.695 --rc geninfo_all_blocks=1 00:16:38.695 --rc 
geninfo_unexecuted_blocks=1 00:16:38.695 00:16:38.695 ' 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.695 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.696 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:38.696 Cannot find device "nvmf_init_br" 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:38.696 Cannot find device "nvmf_init_br2" 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:38.696 Cannot find device "nvmf_tgt_br" 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.696 Cannot find device "nvmf_tgt_br2" 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:38.696 Cannot find device "nvmf_init_br" 00:16:38.696 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:38.697 Cannot find device "nvmf_init_br2" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:38.697 Cannot find device "nvmf_tgt_br" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:38.697 Cannot find device "nvmf_tgt_br2" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:38.697 Cannot find device "nvmf_br" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:38.697 Cannot find 
device "nvmf_init_if" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:38.697 Cannot find device "nvmf_init_if2" 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.697 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:38.956 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:38.957 17:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:38.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:38.957 00:16:38.957 --- 10.0.0.3 ping statistics --- 00:16:38.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.957 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:38.957 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:38.957 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:38.957 00:16:38.957 --- 10.0.0.4 ping statistics --- 00:16:38.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.957 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:38.957 00:16:38.957 --- 10.0.0.1 ping statistics --- 00:16:38.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.957 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:38.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:38.957 00:16:38.957 --- 10.0.0.2 ping statistics --- 00:16:38.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.957 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.957 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85359 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85359 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85359 ']' 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
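[annotation] The four successful pings above conclude the veth topology bring-up that nvmf_veth_init traced command by command; immediately after it, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF) and waitforlisten polls /var/tmp/spdk.sock. For readability, a condensed sketch of the equivalent manual setup follows; interface names, addresses, and the iptables comment tags are copied from the trace, while ordering is simplified and error handling is omitted, so this is a reconstruction rather than the full common.sh logic:

    # Target-side interfaces live in a dedicated network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs; the *_br peers
    # stay in the root namespace and are enslaved to a bridge below.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing as pinged above: initiators 10.0.0.1/.2, target 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the root-namespace peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Open TCP/4420 on the initiator interfaces; the SPDK_NVMF comment tag
    # lets the teardown strip these rules via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'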
00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.216 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:39.216 [2024-11-19 17:15:31.556526] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:39.216 [2024-11-19 17:15:31.556655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.216 [2024-11-19 17:15:31.712388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.475 [2024-11-19 17:15:31.788408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.475 [2024-11-19 17:15:31.788479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.475 [2024-11-19 17:15:31.788504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.475 [2024-11-19 17:15:31.788515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.475 [2024-11-19 17:15:31.788524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.475 [2024-11-19 17:15:31.789090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 [2024-11-19 17:15:32.676303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 Malloc0 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:40.412 [2024-11-19 17:15:32.719152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85409 00:16:40.412 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:40.413 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85410 00:16:40.413 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:40.413 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85411 00:16:40.413 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85409 00:16:40.413 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:40.413 [2024-11-19 17:15:32.907793] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:16:40.413 [2024-11-19 17:15:32.908119] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:40.413 [2024-11-19 17:15:32.917665] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:41.790 Initializing NVMe Controllers 00:16:41.790 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:41.790 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:41.790 Initialization complete. Launching workers. 00:16:41.790 ======================================================== 00:16:41.790 Latency(us) 00:16:41.790 Device Information : IOPS MiB/s Average min max 00:16:41.790 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3276.00 12.80 304.85 155.82 703.17 00:16:41.790 ======================================================== 00:16:41.790 Total : 3276.00 12.80 304.85 155.82 703.17 00:16:41.790 00:16:41.790 Initializing NVMe Controllers 00:16:41.791 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:41.791 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:41.791 Initialization complete. Launching workers. 00:16:41.791 ======================================================== 00:16:41.791 Latency(us) 00:16:41.791 Device Information : IOPS MiB/s Average min max 00:16:41.791 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3235.00 12.64 308.85 168.38 1938.24 00:16:41.791 ======================================================== 00:16:41.791 Total : 3235.00 12.64 308.85 168.38 1938.24 00:16:41.791 00:16:41.791 Initializing NVMe Controllers 00:16:41.791 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:41.791 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:41.791 Initialization complete. Launching workers. 
00:16:41.791 ======================================================== 00:16:41.791 Latency(us) 00:16:41.791 Device Information : IOPS MiB/s Average min max 00:16:41.791 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3253.95 12.71 306.88 122.36 488.52 00:16:41.791 ======================================================== 00:16:41.791 Total : 3253.95 12.71 306.88 122.36 488.52 00:16:41.791 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85410 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85411 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.791 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.791 rmmod nvme_tcp 00:16:41.791 rmmod nvme_fabrics 00:16:41.791 rmmod nvme_keyring 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85359 ']' 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85359 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85359 ']' 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85359 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85359 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85359' 00:16:41.791 killing process with pid 85359 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85359 00:16:41.791 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85359 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:42.050 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:42.310 00:16:42.310 real 0m3.871s 00:16:42.310 user 0m5.689s 00:16:42.310 
sys 0m1.582s 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.310 ************************************ 00:16:42.310 END TEST nvmf_control_msg_list 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:42.310 ************************************ 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.310 ************************************ 00:16:42.310 START TEST nvmf_wait_for_buf 00:16:42.310 ************************************ 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:42.310 * Looking for test storage... 00:16:42.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:42.310 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.570 --rc genhtml_branch_coverage=1 00:16:42.570 --rc genhtml_function_coverage=1 00:16:42.570 --rc genhtml_legend=1 00:16:42.570 --rc geninfo_all_blocks=1 00:16:42.570 --rc geninfo_unexecuted_blocks=1 00:16:42.570 00:16:42.570 ' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.570 --rc genhtml_branch_coverage=1 00:16:42.570 --rc genhtml_function_coverage=1 00:16:42.570 --rc genhtml_legend=1 00:16:42.570 --rc geninfo_all_blocks=1 00:16:42.570 --rc geninfo_unexecuted_blocks=1 00:16:42.570 00:16:42.570 ' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.570 --rc genhtml_branch_coverage=1 00:16:42.570 --rc genhtml_function_coverage=1 00:16:42.570 --rc genhtml_legend=1 00:16:42.570 --rc geninfo_all_blocks=1 00:16:42.570 --rc geninfo_unexecuted_blocks=1 00:16:42.570 00:16:42.570 ' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.570 --rc genhtml_branch_coverage=1 00:16:42.570 --rc genhtml_function_coverage=1 00:16:42.570 --rc genhtml_legend=1 00:16:42.570 --rc geninfo_all_blocks=1 00:16:42.570 --rc geninfo_unexecuted_blocks=1 00:16:42.570 00:16:42.570 ' 00:16:42.570 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.570 17:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:42.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
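The "[: : integer expression expected" message captured above comes from an empty string reaching a numeric test at nvmf/common.sh line 33 ('[' '' -eq 1 ']'). A minimal sketch of the failure and a defensive form; the variable name here is hypothetical, assuming the relevant test flag is simply unset in this run:

# Reproduces the captured error: an empty string is not an integer.
FLAG=""
[ "$FLAG" -eq 1 ] && echo enabled    # -> [: : integer expression expected
# Defensive form: default the flag to 0 before the numeric comparison.
[ "${FLAG:-0}" -eq 1 ] && echo enabled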
00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:42.571 Cannot find device "nvmf_init_br" 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:42.571 Cannot find device "nvmf_init_br2" 00:16:42.571 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:42.571 Cannot find device "nvmf_tgt_br" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.571 Cannot find device "nvmf_tgt_br2" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:42.571 Cannot find device "nvmf_init_br" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:42.571 Cannot find device "nvmf_init_br2" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:42.571 Cannot find device "nvmf_tgt_br" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:42.571 Cannot find device "nvmf_tgt_br2" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:42.571 Cannot find device "nvmf_br" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:42.571 Cannot find device "nvmf_init_if" 00:16:42.571 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:42.572 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:42.830 Cannot find device "nvmf_init_if2" 00:16:42.830 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.831 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:16:42.831 00:16:42.831 --- 10.0.0.3 ping statistics --- 00:16:42.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.831 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.831 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:42.831 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:42.831 00:16:42.831 --- 10.0.0.4 ping statistics --- 00:16:42.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.831 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:16:42.831 00:16:42.831 --- 10.0.0.1 ping statistics --- 00:16:42.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.831 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:42.831 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:43.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:43.090 00:16:43.090 --- 10.0.0.2 ping statistics --- 00:16:43.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.090 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85652 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85652 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85652 ']' 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.090 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:43.090 [2024-11-19 17:15:35.464050] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
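The target above is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so the harness blocks in waitforlisten until the app is alive and its UNIX RPC socket exists before any configuration is sent. A simplified sketch of that startup pattern, with paths as in the log (the polling loop is a stand-in for the real waitforlisten helper):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Poll until the RPC socket appears; bail out if the target dies during startup.
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done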
00:16:43.090 [2024-11-19 17:15:35.464175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.090 [2024-11-19 17:15:35.616649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.350 [2024-11-19 17:15:35.699648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.350 [2024-11-19 17:15:35.699724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.350 [2024-11-19 17:15:35.699746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.350 [2024-11-19 17:15:35.699756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.350 [2024-11-19 17:15:35.699764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.350 [2024-11-19 17:15:35.700264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 Malloc0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 [2024-11-19 17:15:36.766341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 [2024-11-19 17:15:36.790497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:44.288 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.289 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:44.547 [2024-11-19 17:15:37.000976] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:16:45.924 Initializing NVMe Controllers 00:16:45.924 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:45.924 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:45.924 Initialization complete. Launching workers. 00:16:45.924 ======================================================== 00:16:45.924 Latency(us) 00:16:45.924 Device Information : IOPS MiB/s Average min max 00:16:45.924 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.88 32885.96 8003.98 66044.27 00:16:45.924 ======================================================== 00:16:45.924 Total : 127.00 15.88 32885.96 8003.98 66044.27 00:16:45.924 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.924 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.183 rmmod nvme_tcp 00:16:46.183 rmmod nvme_fabrics 00:16:46.183 rmmod nvme_keyring 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85652 ']' 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85652 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85652 ']' 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85652 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
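The retry_count=2006 check above reads the nvmf_TCP small-pool retry counter via iobuf_get_stats; the test passes when it is non-zero, i.e. the deliberately tiny iobuf pool (154 small buffers against qd 4 x 128K reads) forced requests to wait for buffers. A sketch of that query, assuming the repo's rpc.py is on the path shown in the log:

retry=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
# wait_for_buf expects at least one retry to have occurred.
[ "$retry" -eq 0 ] && echo "no buffer waits observed" && exit 1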
00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85652 00:16:46.183 killing process with pid 85652 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85652' 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85652 00:16:46.183 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85652 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:46.442 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.701 17:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:46.701 ************************************ 00:16:46.701 END TEST nvmf_wait_for_buf 00:16:46.701 ************************************ 00:16:46.701 00:16:46.701 real 0m4.310s 00:16:46.701 user 0m3.866s 00:16:46.701 sys 0m0.937s 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.701 ************************************ 00:16:46.701 START TEST nvmf_nsid 00:16:46.701 ************************************ 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:46.701 * Looking for test storage... 
00:16:46.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.701 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.959 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.960 --rc genhtml_branch_coverage=1 00:16:46.960 --rc genhtml_function_coverage=1 00:16:46.960 --rc genhtml_legend=1 00:16:46.960 --rc geninfo_all_blocks=1 00:16:46.960 --rc geninfo_unexecuted_blocks=1 00:16:46.960 00:16:46.960 ' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.960 --rc genhtml_branch_coverage=1 00:16:46.960 --rc genhtml_function_coverage=1 00:16:46.960 --rc genhtml_legend=1 00:16:46.960 --rc geninfo_all_blocks=1 00:16:46.960 --rc geninfo_unexecuted_blocks=1 00:16:46.960 00:16:46.960 ' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.960 --rc genhtml_branch_coverage=1 00:16:46.960 --rc genhtml_function_coverage=1 00:16:46.960 --rc genhtml_legend=1 00:16:46.960 --rc geninfo_all_blocks=1 00:16:46.960 --rc geninfo_unexecuted_blocks=1 00:16:46.960 00:16:46.960 ' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.960 --rc genhtml_branch_coverage=1 00:16:46.960 --rc genhtml_function_coverage=1 00:16:46.960 --rc genhtml_legend=1 00:16:46.960 --rc geninfo_all_blocks=1 00:16:46.960 --rc geninfo_unexecuted_blocks=1 00:16:46.960 00:16:46.960 ' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
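The xtrace above steps through cmp_versions: both version strings are split on ".-:" into arrays and compared field by field to decide whether the installed lcov predates 2.x (and so still needs the --rc branch/function coverage options). A condensed standalone sketch of that comparison logic, not the exact scripts/common.sh helper:

lt() {  # true if $1 < $2, comparing dotted numeric fields
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"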
00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.960 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:46.961 Cannot find device "nvmf_init_br" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:46.961 Cannot find device "nvmf_init_br2" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:46.961 Cannot find device "nvmf_tgt_br" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.961 Cannot find device "nvmf_tgt_br2" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:46.961 Cannot find device "nvmf_init_br" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:46.961 Cannot find device "nvmf_init_br2" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:46.961 Cannot find device "nvmf_tgt_br" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:46.961 Cannot find device "nvmf_tgt_br2" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:46.961 Cannot find device "nvmf_br" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:46.961 Cannot find device "nvmf_init_if" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:46.961 Cannot find device "nvmf_init_if2" 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:46.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
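Condensed for reference, the namespace/veth/bridge plumbing traced above amounts to the following sketch. Commands are copied from the trace; only the first of the two interface pairs is shown, and the nvmf_init_if2/nvmf_tgt_if2 pair is created the same way:

    # create the target-side network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end carries traffic, the *_br peer end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # move the target end into the namespace, then address both sides
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring links up and enslave both *_br ends to the nvmf_br bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The four pings that follow in the trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verify this topology in both directions before the target is started.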
00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:47.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:47.227 00:16:47.227 --- 10.0.0.3 ping statistics --- 00:16:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.227 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:47.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:47.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:16:47.227 00:16:47.227 --- 10.0.0.4 ping statistics --- 00:16:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.227 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:47.227 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:47.227 00:16:47.227 --- 10.0.0.1 ping statistics --- 00:16:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.227 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:47.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:47.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:47.510 00:16:47.510 --- 10.0.0.2 ping statistics --- 00:16:47.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.510 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=85965 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 85965 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85965 ']' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.510 [2024-11-19 17:15:39.853783] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:16:47.510 [2024-11-19 17:15:39.854052] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.510 [2024-11-19 17:15:40.002910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.769 [2024-11-19 17:15:40.086215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.769 [2024-11-19 17:15:40.086290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.769 [2024-11-19 17:15:40.086319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.769 [2024-11-19 17:15:40.086327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.769 [2024-11-19 17:15:40.086333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.769 [2024-11-19 17:15:40.086832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.769 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.769 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:47.769 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.769 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.769 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=85990 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8785d1b1-51fc-420c-8057-8dd8d6c5e2f0 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5699b762-b22d-4f12-8c63-54d13ba2257e 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f883395b-2771-4596-9f39-6ca867184fa1 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 null0 00:16:48.028 null1 00:16:48.028 null2 00:16:48.028 [2024-11-19 17:15:40.371596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.028 [2024-11-19 17:15:40.382122] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:48.028 [2024-11-19 17:15:40.382215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85990 ] 00:16:48.028 [2024-11-19 17:15:40.395735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:48.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 85990 /var/tmp/tgt2.sock 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85990 ']' 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
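The waitforlisten helper invoked above is not expanded in the trace; a minimal sketch of its polling loop, assuming the retry budget of 100 visible in max_retries and SPDK's stock rpc.py, would be:

    waitforlisten_sketch() {    # hypothetical name; the real helper lives in common/autotest_common.sh
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do           # max_retries=100, as set in the trace
            kill -0 "$pid" || return 1      # bail out if the target process died
            # done once the UNIX-domain RPC socket answers a trivial request
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

This is why the trace shows the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock..." message once per target: the nsid test runs two targets, one on the default /var/tmp/spdk.sock and a second on /var/tmp/tgt2.sock.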
00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.028 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 [2024-11-19 17:15:40.534062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.287 [2024-11-19 17:15:40.598931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.546 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.546 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:48.546 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:49.114 [2024-11-19 17:15:41.378020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.114 [2024-11-19 17:15:41.394093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:49.114 nvme0n1 nvme0n2 00:16:49.114 nvme1n1 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:49.114 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:50.490 17:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8785d1b1-51fc-420c-8057-8dd8d6c5e2f0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8785d1b151fc420c80578dd8d6c5e2f0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8785D1B151FC420C80578DD8D6C5E2F0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8785D1B151FC420C80578DD8D6C5E2F0 == \8\7\8\5\D\1\B\1\5\1\F\C\4\2\0\C\8\0\5\7\8\D\D\8\D\6\C\5\E\2\F\0 ]] 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5699b762-b22d-4f12-8c63-54d13ba2257e 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5699b762b22d4f128c6354d13ba2257e 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5699B762B22D4F128C6354D13BA2257E 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5699B762B22D4F128C6354D13BA2257E == \5\6\9\9\B\7\6\2\B\2\2\D\4\F\1\2\8\C\6\3\5\4\D\1\3\B\A\2\2\5\7\E ]] 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:50.490 17:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f883395b-2771-4596-9f39-6ca867184fa1 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f883395b277145969f396ca867184fa1 00:16:50.490 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F883395B277145969F396CA867184FA1 00:16:50.491 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F883395B277145969F396CA867184FA1 == \F\8\8\3\3\9\5\B\2\7\7\1\4\5\9\6\9\F\3\9\6\C\A\8\6\7\1\8\4\F\A\1 ]] 00:16:50.491 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 85990 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85990 ']' 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85990 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.491 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85990 00:16:50.749 killing process with pid 85990 00:16:50.749 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:50.749 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:50.749 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85990' 00:16:50.749 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85990 00:16:50.749 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85990 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:51.315 rmmod nvme_tcp 00:16:51.315 rmmod nvme_fabrics 00:16:51.315 rmmod nvme_keyring 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 85965 ']' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 85965 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85965 ']' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85965 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85965 00:16:51.315 killing process with pid 85965 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85965' 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85965 00:16:51.315 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85965 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:51.575 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:51.575 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:51.837 00:16:51.837 real 0m5.091s 00:16:51.837 user 0m7.755s 00:16:51.837 sys 0m1.517s 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:51.837 ************************************ 00:16:51.837 END TEST nvmf_nsid 00:16:51.837 ************************************ 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:51.837 ************************************ 00:16:51.837 END TEST nvmf_target_extra 00:16:51.837 ************************************ 00:16:51.837 00:16:51.837 real 7m22.839s 00:16:51.837 user 17m45.240s 00:16:51.837 sys 1m30.096s 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.837 17:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.837 17:15:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:51.837 17:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.837 17:15:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.837 17:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.837 ************************************ 00:16:51.837 START TEST nvmf_host 00:16:51.837 ************************************ 00:16:51.837 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:51.837 * Looking for test storage... 
00:16:52.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.097 --rc genhtml_branch_coverage=1 00:16:52.097 --rc genhtml_function_coverage=1 00:16:52.097 --rc genhtml_legend=1 00:16:52.097 --rc geninfo_all_blocks=1 00:16:52.097 --rc geninfo_unexecuted_blocks=1 00:16:52.097 00:16:52.097 ' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.097 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:52.097 --rc genhtml_branch_coverage=1 00:16:52.097 --rc genhtml_function_coverage=1 00:16:52.097 --rc genhtml_legend=1 00:16:52.097 --rc geninfo_all_blocks=1 00:16:52.097 --rc geninfo_unexecuted_blocks=1 00:16:52.097 00:16:52.097 ' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.097 --rc genhtml_branch_coverage=1 00:16:52.097 --rc genhtml_function_coverage=1 00:16:52.097 --rc genhtml_legend=1 00:16:52.097 --rc geninfo_all_blocks=1 00:16:52.097 --rc geninfo_unexecuted_blocks=1 00:16:52.097 00:16:52.097 ' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.097 --rc genhtml_branch_coverage=1 00:16:52.097 --rc genhtml_function_coverage=1 00:16:52.097 --rc genhtml_legend=1 00:16:52.097 --rc geninfo_all_blocks=1 00:16:52.097 --rc geninfo_unexecuted_blocks=1 00:16:52.097 00:16:52.097 ' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.097 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:52.097 17:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
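The recurring "[: : integer expression expected" complaint from common.sh line 33 is a real shell error visible in the expanded trace: '[' '' -eq 1 ']' shows an unset variable being compared numerically. A defensive rewrite defaults the variable before the test; the flag name below is a stand-in, since the trace only shows the empty expansion:

    # before: [ "$SOME_FLAG" -eq 1 ]   -> errors when SOME_FLAG expands to ''
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is a hypothetical name, not from the trace
        : # whatever the guarded branch does
    fi

The error is harmless here only because the surrounding code tolerates the test's non-zero status, which is why the run continues past it each time.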
00:16:52.098 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.098 ************************************ 00:16:52.098 START TEST nvmf_multicontroller 00:16:52.098 ************************************ 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:52.098 * Looking for test storage... 00:16:52.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.098 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.357 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.358 --rc genhtml_branch_coverage=1 00:16:52.358 --rc genhtml_function_coverage=1 00:16:52.358 --rc genhtml_legend=1 00:16:52.358 --rc geninfo_all_blocks=1 00:16:52.358 --rc geninfo_unexecuted_blocks=1 00:16:52.358 00:16:52.358 ' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.358 --rc genhtml_branch_coverage=1 00:16:52.358 --rc genhtml_function_coverage=1 00:16:52.358 --rc genhtml_legend=1 00:16:52.358 --rc geninfo_all_blocks=1 00:16:52.358 --rc geninfo_unexecuted_blocks=1 00:16:52.358 00:16:52.358 ' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.358 --rc genhtml_branch_coverage=1 00:16:52.358 --rc genhtml_function_coverage=1 00:16:52.358 --rc genhtml_legend=1 00:16:52.358 --rc geninfo_all_blocks=1 00:16:52.358 --rc geninfo_unexecuted_blocks=1 00:16:52.358 00:16:52.358 ' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.358 --rc genhtml_branch_coverage=1 00:16:52.358 --rc genhtml_function_coverage=1 00:16:52.358 --rc genhtml_legend=1 00:16:52.358 --rc geninfo_all_blocks=1 00:16:52.358 --rc geninfo_unexecuted_blocks=1 00:16:52.358 00:16:52.358 ' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:52.358 17:15:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.358 17:15:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:52.358 17:15:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:52.358 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:52.359 Cannot find device "nvmf_init_br" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:52.359 Cannot find device "nvmf_init_br2" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:52.359 Cannot find device "nvmf_tgt_br" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.359 Cannot find device "nvmf_tgt_br2" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:52.359 Cannot find device "nvmf_init_br" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:52.359 Cannot find device "nvmf_init_br2" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:52.359 Cannot find device "nvmf_tgt_br" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:52.359 Cannot find device "nvmf_tgt_br2" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:52.359 Cannot find device "nvmf_br" 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:16:52.359 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:52.359 Cannot find device "nvmf_init_if" 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:52.618 Cannot find device "nvmf_init_if2" 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.618 17:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:52.618 17:15:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:52.618 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:52.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:52.878 00:16:52.878 --- 10.0.0.3 ping statistics --- 00:16:52.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.878 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:52.878 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:52.878 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:52.878 00:16:52.878 --- 10.0.0.4 ping statistics --- 00:16:52.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.878 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:52.878 00:16:52.878 --- 10.0.0.1 ping statistics --- 00:16:52.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.878 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:52.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:52.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:52.878 00:16:52.878 --- 10.0.0.2 ping statistics --- 00:16:52.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.878 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86364 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86364 00:16:52.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86364 ']' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.878 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 [2024-11-19 17:15:45.297239] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
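[Editor's sketch] The nvmfappstart step traced above launches nvmf_tgt inside the test namespace and then polls its RPC socket until it answers. A minimal hand-run equivalent, assuming an SPDK build under $HOME/spdk (standing in for the log's /home/vagrant/spdk_repo/spdk) and the default /var/tmp/spdk.sock socket; this is illustrative, not the harness's exact waitforlisten logic:

    # Start nvmf_tgt inside the namespace created earlier: shm id 0,
    # all tracepoint groups enabled (-e 0xFFFF), core mask 0xE.
    ip netns exec nvmf_tgt_ns_spdk "$HOME/spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the app answers on its UNIX-domain RPC socket.
    until "$HOME/spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done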
00:16:52.878 [2024-11-19 17:15:45.297336] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.137 [2024-11-19 17:15:45.451435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.137 [2024-11-19 17:15:45.518502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.137 [2024-11-19 17:15:45.518570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.137 [2024-11-19 17:15:45.518584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.137 [2024-11-19 17:15:45.518595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.137 [2024-11-19 17:15:45.518605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.137 [2024-11-19 17:15:45.520410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.137 [2024-11-19 17:15:45.520272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.137 [2024-11-19 17:15:45.520400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 [2024-11-19 17:15:45.741419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 Malloc0 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 [2024-11-19 17:15:45.816376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 [2024-11-19 17:15:45.824190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 Malloc1 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:16:53.397 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86408 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86408 /var/tmp/bdevperf.sock 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86408 ']' 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
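[Editor's sketch] What follows in the trace is the core multicontroller check: reusing the controller name NVMe0 for the same network path, a different subsystem, a different hostnqn, or with multipath disabled must all fail with Code=-114, while a genuinely different path (the 4421 listener) is accepted. A rough by-hand equivalent, assuming the bdevperf instance above is listening on /var/tmp/bdevperf.sock (paths illustrative):

    rpc() { "$HOME/spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }
    # First attach succeeds and creates bdev NVMe0n1.
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Reusing the name for the same path, another subsystem, another hostnqn,
    # or with -x disable is rejected with Code=-114, as the errors below show.
    # A second path to the same subsystem is accepted, as at multicontroller.sh@79:
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1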
00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.398 17:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.966 NVMe0n1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.966 1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.966 2024/11/19 17:15:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:53.966 request: 00:16:53.966 { 00:16:53.966 "method": "bdev_nvme_attach_controller", 00:16:53.966 "params": { 00:16:53.966 "name": "NVMe0", 00:16:53.966 "trtype": "tcp", 00:16:53.966 "traddr": "10.0.0.3", 00:16:53.966 "adrfam": "ipv4", 00:16:53.966 "trsvcid": "4420", 00:16:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.966 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:53.966 "hostaddr": "10.0.0.1", 00:16:53.966 "prchk_reftag": false, 00:16:53.966 "prchk_guard": false, 00:16:53.966 "hdgst": false, 00:16:53.966 "ddgst": false, 00:16:53.966 "allow_unrecognized_csi": false 00:16:53.966 } 00:16:53.966 } 00:16:53.966 Got JSON-RPC error response 00:16:53.966 GoRPCClient: error on JSON-RPC call 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.966 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.967 2024/11/19 17:15:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:53.967 request: 00:16:53.967 { 00:16:53.967 "method": "bdev_nvme_attach_controller", 00:16:53.967 "params": { 00:16:53.967 "name": "NVMe0", 00:16:53.967 "trtype": "tcp", 00:16:53.967 "traddr": "10.0.0.3", 00:16:53.967 "adrfam": "ipv4", 00:16:53.967 "trsvcid": "4420", 00:16:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:53.967 "hostaddr": "10.0.0.1", 00:16:53.967 "prchk_reftag": false, 00:16:53.967 "prchk_guard": false, 00:16:53.967 "hdgst": false, 00:16:53.967 "ddgst": false, 00:16:53.967 "allow_unrecognized_csi": false 00:16:53.967 } 00:16:53.967 } 00:16:53.967 Got JSON-RPC error response 00:16:53.967 GoRPCClient: error on JSON-RPC call 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.967 2024/11/19 17:15:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:53.967 request: 00:16:53.967 { 00:16:53.967 
"method": "bdev_nvme_attach_controller", 00:16:53.967 "params": { 00:16:53.967 "name": "NVMe0", 00:16:53.967 "trtype": "tcp", 00:16:53.967 "traddr": "10.0.0.3", 00:16:53.967 "adrfam": "ipv4", 00:16:53.967 "trsvcid": "4420", 00:16:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.967 "hostaddr": "10.0.0.1", 00:16:53.967 "prchk_reftag": false, 00:16:53.967 "prchk_guard": false, 00:16:53.967 "hdgst": false, 00:16:53.967 "ddgst": false, 00:16:53.967 "multipath": "disable", 00:16:53.967 "allow_unrecognized_csi": false 00:16:53.967 } 00:16:53.967 } 00:16:53.967 Got JSON-RPC error response 00:16:53.967 GoRPCClient: error on JSON-RPC call 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.967 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:53.967 2024/11/19 17:15:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:53.967 request: 00:16:53.967 { 00:16:53.967 "method": "bdev_nvme_attach_controller", 00:16:53.967 "params": { 00:16:53.967 "name": "NVMe0", 00:16:53.967 "trtype": "tcp", 00:16:53.967 "traddr": 
"10.0.0.3", 00:16:53.967 "adrfam": "ipv4", 00:16:53.967 "trsvcid": "4420", 00:16:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.967 "hostaddr": "10.0.0.1", 00:16:53.967 "prchk_reftag": false, 00:16:53.967 "prchk_guard": false, 00:16:53.968 "hdgst": false, 00:16:53.968 "ddgst": false, 00:16:53.968 "multipath": "failover", 00:16:53.968 "allow_unrecognized_csi": false 00:16:53.968 } 00:16:53.968 } 00:16:53.968 Got JSON-RPC error response 00:16:53.968 GoRPCClient: error on JSON-RPC call 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.968 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.226 NVMe0n1 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.226 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.226 17:15:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:54.226 17:15:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:55.603 { 00:16:55.603 "results": [ 00:16:55.603 { 00:16:55.603 "job": "NVMe0n1", 00:16:55.603 "core_mask": "0x1", 00:16:55.603 "workload": "write", 00:16:55.603 "status": "finished", 00:16:55.603 "queue_depth": 128, 00:16:55.603 "io_size": 4096, 00:16:55.603 "runtime": 1.003407, 00:16:55.603 "iops": 20314.78751892303, 00:16:55.603 "mibps": 79.35463874579308, 00:16:55.603 "io_failed": 0, 00:16:55.603 "io_timeout": 0, 00:16:55.603 "avg_latency_us": 6290.312019409162, 00:16:55.603 "min_latency_us": 3470.429090909091, 00:16:55.603 "max_latency_us": 14417.92 00:16:55.603 } 00:16:55.603 ], 00:16:55.603 "core_count": 1 00:16:55.603 } 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 nvme1n1 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 00:16:55.603 
17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 nvme1n1 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86408 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86408 ']' 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86408 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86408 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.603 killing process with pid 86408 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86408' 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86408 00:16:55.603 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86408 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM 
EXIT 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:55.862 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:16:56.122 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:56.122 [2024-11-19 17:15:45.959089] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:16:56.122 [2024-11-19 17:15:45.959230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86408 ] 00:16:56.122 [2024-11-19 17:15:46.112846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.122 [2024-11-19 17:15:46.180332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.122 [2024-11-19 17:15:46.631019] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name bf477ef8-0d35-4e3d-8cf9-2f6e932eeecb already exists 00:16:56.122 [2024-11-19 17:15:46.631097] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:bf477ef8-0d35-4e3d-8cf9-2f6e932eeecb alias for bdev NVMe1n1 00:16:56.122 [2024-11-19 17:15:46.631134] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:56.122 Running I/O for 1 seconds... 
00:16:56.122 20256.00 IOPS, 79.12 MiB/s 00:16:56.122 Latency(us) 00:16:56.122 [2024-11-19T17:15:48.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.122 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:56.122 NVMe0n1 : 1.00 20314.79 79.35 0.00 0.00 6290.31 3470.43 14417.92 00:16:56.122 [2024-11-19T17:15:48.655Z] =================================================================================================================== 00:16:56.122 [2024-11-19T17:15:48.655Z] Total : 20314.79 79.35 0.00 0.00 6290.31 3470.43 14417.92 00:16:56.122 Received shutdown signal, test time was about 1.000000 seconds 00:16:56.122 00:16:56.122 Latency(us) 00:16:56.122 [2024-11-19T17:15:48.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.122 [2024-11-19T17:15:48.655Z] =================================================================================================================== 00:16:56.122 [2024-11-19T17:15:48.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.122 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.122 rmmod nvme_tcp 00:16:56.122 rmmod nvme_fabrics 00:16:56.122 rmmod nvme_keyring 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86364 ']' 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86364 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86364 ']' 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86364 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86364 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.122 killing process with pid 86364 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86364' 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86364 00:16:56.122 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86364 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:56.381 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:56.382 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:56.382 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:56.382 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:56.640 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:56.640 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:56.640 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.640 17:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
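[Editor's sketch] The teardown traced above (iptr, then nvmf_veth_fini, then remove_spdk_ns) restores iptables and removes the veth/bridge topology in roughly the reverse order of setup. A condensed sketch of the same cleanup, assuming the interface and namespace names created earlier; the harness's own sequence is slightly longer:

    # Drop only the rules the test tagged with the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster 2>/dev/null
        ip link set "$ifc" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null
    # Deleting the namespace also removes nvmf_tgt_if/nvmf_tgt_if2 moved into it.
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null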
00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:16:56.640 00:16:56.640 real 0m4.521s 00:16:56.640 user 0m12.606s 00:16:56.640 sys 0m1.315s 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.640 ************************************ 00:16:56.640 END TEST nvmf_multicontroller 00:16:56.640 ************************************ 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.640 ************************************ 00:16:56.640 START TEST nvmf_aer 00:16:56.640 ************************************ 00:16:56.640 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.899 * Looking for test storage... 00:16:56.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:16:56.899 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.900 --rc genhtml_branch_coverage=1 00:16:56.900 --rc genhtml_function_coverage=1 00:16:56.900 --rc genhtml_legend=1 00:16:56.900 --rc geninfo_all_blocks=1 00:16:56.900 --rc geninfo_unexecuted_blocks=1 00:16:56.900 00:16:56.900 ' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.900 --rc genhtml_branch_coverage=1 00:16:56.900 --rc genhtml_function_coverage=1 00:16:56.900 --rc genhtml_legend=1 00:16:56.900 --rc geninfo_all_blocks=1 00:16:56.900 --rc geninfo_unexecuted_blocks=1 00:16:56.900 00:16:56.900 ' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.900 --rc genhtml_branch_coverage=1 00:16:56.900 --rc genhtml_function_coverage=1 00:16:56.900 --rc genhtml_legend=1 00:16:56.900 --rc geninfo_all_blocks=1 00:16:56.900 --rc geninfo_unexecuted_blocks=1 00:16:56.900 00:16:56.900 ' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.900 --rc genhtml_branch_coverage=1 00:16:56.900 --rc genhtml_function_coverage=1 00:16:56.900 --rc genhtml_legend=1 00:16:56.900 --rc geninfo_all_blocks=1 00:16:56.900 --rc geninfo_unexecuted_blocks=1 00:16:56.900 00:16:56.900 ' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.900 
17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.900 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
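The "[: : integer expression expected" message above is bash objecting to '[' '' -eq 1 ']': an unset variable expanded to an empty string in a numeric test. The run continues because the test simply evaluates false. A hedged sketch of the usual guard for that pattern (the variable name here is hypothetical, not taken from the harness):

    # SPDK_TEST_FOO may be unset or empty; default it before the numeric test
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi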
00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.901 Cannot find device "nvmf_init_br" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.901 Cannot find device "nvmf_init_br2" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.901 Cannot find device "nvmf_tgt_br" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.901 Cannot find device "nvmf_tgt_br2" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.901 Cannot find device "nvmf_init_br" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.901 Cannot find device "nvmf_init_br2" 00:16:56.901 17:15:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.901 Cannot find device "nvmf_tgt_br" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.901 Cannot find device "nvmf_tgt_br2" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.901 Cannot find device "nvmf_br" 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:16:56.901 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:57.160 Cannot find device "nvmf_init_if" 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:57.160 Cannot find device "nvmf_init_if2" 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:57.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:16:57.160 00:16:57.160 --- 10.0.0.3 ping statistics --- 00:16:57.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.160 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:57.160 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:57.419 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:57.419 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:57.419 00:16:57.419 --- 10.0.0.4 ping statistics --- 00:16:57.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.419 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:57.419 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:57.419 00:16:57.419 --- 10.0.0.1 ping statistics --- 00:16:57.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.419 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:57.419 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:57.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:57.419 00:16:57.419 --- 10.0.0.2 ping statistics --- 00:16:57.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.420 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:57.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86706 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86706 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86706 ']' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.420 17:15:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:57.420 [2024-11-19 17:15:49.799558] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
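Condensing the nvmf_veth_init sequence traced above: the harness creates four veth pairs, moves the two target-side interfaces into a private network namespace, bridges all four peer ends, opens TCP/4420, and verifies reachability with pings in both directions. A standalone sketch of the same topology, with interface names and addresses exactly as they appear in the log (run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br     # enslave all peer ends to one bridge
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                 # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator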
00:16:57.420 [2024-11-19 17:15:49.799970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.690 [2024-11-19 17:15:49.958199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.690 [2024-11-19 17:15:50.022665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.690 [2024-11-19 17:15:50.023091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.690 [2024-11-19 17:15:50.023242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.690 [2024-11-19 17:15:50.023259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.690 [2024-11-19 17:15:50.023270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.690 [2024-11-19 17:15:50.024808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.690 [2024-11-19 17:15:50.025825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.690 [2024-11-19 17:15:50.026003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.690 [2024-11-19 17:15:50.026016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.641 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 [2024-11-19 17:15:50.866587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 Malloc0 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 [2024-11-19 17:15:50.939613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 [ 00:16:58.642 { 00:16:58.642 "allow_any_host": true, 00:16:58.642 "hosts": [], 00:16:58.642 "listen_addresses": [], 00:16:58.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.642 "subtype": "Discovery" 00:16:58.642 }, 00:16:58.642 { 00:16:58.642 "allow_any_host": true, 00:16:58.642 "hosts": [], 00:16:58.642 "listen_addresses": [ 00:16:58.642 { 00:16:58.642 "adrfam": "IPv4", 00:16:58.642 "traddr": "10.0.0.3", 00:16:58.642 "trsvcid": "4420", 00:16:58.642 "trtype": "TCP" 00:16:58.642 } 00:16:58.642 ], 00:16:58.642 "max_cntlid": 65519, 00:16:58.642 "max_namespaces": 2, 00:16:58.642 "min_cntlid": 1, 00:16:58.642 "model_number": "SPDK bdev Controller", 00:16:58.642 "namespaces": [ 00:16:58.642 { 00:16:58.642 "bdev_name": "Malloc0", 00:16:58.642 "name": "Malloc0", 00:16:58.642 "nguid": "D811F7B5C512431F953FA7CFF3907225", 00:16:58.642 "nsid": 1, 00:16:58.642 "uuid": "d811f7b5-c512-431f-953f-a7cff3907225" 00:16:58.642 } 00:16:58.642 ], 00:16:58.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.642 "serial_number": "SPDK00000000000001", 00:16:58.642 "subtype": "NVMe" 00:16:58.642 } 00:16:58.642 ] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86760 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:16:58.642 17:15:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:58.642 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:58.642 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:58.642 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:16:58.642 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 Malloc1 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 Asynchronous Event Request test 00:16:58.901 Attaching to 10.0.0.3 00:16:58.901 Attached to 10.0.0.3 00:16:58.901 Registering asynchronous event callbacks... 00:16:58.901 Starting namespace attribute notice tests for all controllers... 00:16:58.901 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:58.901 aer_cb - Changed Namespace 00:16:58.901 Cleaning up... 
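The waitforfile trace above polls for the touch file that the aer tool creates once its event callback is registered, sleeping 0.1 s per attempt with a cap of 200 tries. A minimal reconstruction of that loop as traced (the real helper in autotest_common.sh may differ in detail; bounds and filename taken from the log):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # give up after roughly 20 seconds
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file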
00:16:58.901 [ 00:16:58.901 { 00:16:58.901 "allow_any_host": true, 00:16:58.901 "hosts": [], 00:16:58.901 "listen_addresses": [], 00:16:58.901 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.901 "subtype": "Discovery" 00:16:58.901 }, 00:16:58.901 { 00:16:58.901 "allow_any_host": true, 00:16:58.901 "hosts": [], 00:16:58.901 "listen_addresses": [ 00:16:58.901 { 00:16:58.901 "adrfam": "IPv4", 00:16:58.901 "traddr": "10.0.0.3", 00:16:58.901 "trsvcid": "4420", 00:16:58.901 "trtype": "TCP" 00:16:58.901 } 00:16:58.901 ], 00:16:58.901 "max_cntlid": 65519, 00:16:58.901 "max_namespaces": 2, 00:16:58.901 "min_cntlid": 1, 00:16:58.901 "model_number": "SPDK bdev Controller", 00:16:58.901 "namespaces": [ 00:16:58.901 { 00:16:58.901 "bdev_name": "Malloc0", 00:16:58.901 "name": "Malloc0", 00:16:58.901 "nguid": "D811F7B5C512431F953FA7CFF3907225", 00:16:58.901 "nsid": 1, 00:16:58.901 "uuid": "d811f7b5-c512-431f-953f-a7cff3907225" 00:16:58.901 }, 00:16:58.901 { 00:16:58.901 "bdev_name": "Malloc1", 00:16:58.901 "name": "Malloc1", 00:16:58.901 "nguid": "ABC190EF4C844CDAB0CD2F4811391BC7", 00:16:58.901 "nsid": 2, 00:16:58.901 "uuid": "abc190ef-4c84-4cda-b0cd-2f4811391bc7" 00:16:58.901 } 00:16:58.901 ], 00:16:58.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.901 "serial_number": "SPDK00000000000001", 00:16:58.901 "subtype": "NVMe" 00:16:58.901 } 00:16:58.901 ] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86760 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.901 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.160 rmmod nvme_tcp 
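End to end, the aer test traced above amounts to the following RPC sequence against the target: bring up the transport, export one namespace, start the host-side AER listener, then hot-add a second namespace, whose attribute change is what fires the "aer_cb - Changed Namespace" line in the log. Method names and arguments are exactly as traced; the rpc.py front end is an assumption about what rpc_cmd wraps:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed front end for rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host side, as invoked in the log:
    #   test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420
    #     subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AER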
00:16:59.160 rmmod nvme_fabrics 00:16:59.160 rmmod nvme_keyring 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 86706 ']' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86706 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86706 ']' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86706 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86706 00:16:59.160 killing process with pid 86706 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86706' 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86706 00:16:59.160 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86706 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:59.419 17:15:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.419 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.677 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:59.677 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.677 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.677 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.677 17:15:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:16:59.677 00:16:59.677 real 0m2.899s 00:16:59.677 user 0m7.234s 00:16:59.677 sys 0m0.912s 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.677 ************************************ 00:16:59.677 END TEST nvmf_aer 00:16:59.677 ************************************ 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.677 ************************************ 00:16:59.677 START TEST nvmf_async_init 00:16:59.677 ************************************ 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:59.677 * Looking for test storage... 
00:16:59.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.677 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.936 --rc genhtml_branch_coverage=1 00:16:59.936 --rc genhtml_function_coverage=1 00:16:59.936 --rc genhtml_legend=1 00:16:59.936 --rc geninfo_all_blocks=1 00:16:59.936 --rc geninfo_unexecuted_blocks=1 00:16:59.936 00:16:59.936 ' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.936 --rc genhtml_branch_coverage=1 00:16:59.936 --rc genhtml_function_coverage=1 00:16:59.936 --rc genhtml_legend=1 00:16:59.936 --rc geninfo_all_blocks=1 00:16:59.936 --rc geninfo_unexecuted_blocks=1 00:16:59.936 00:16:59.936 ' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.936 --rc genhtml_branch_coverage=1 00:16:59.936 --rc genhtml_function_coverage=1 00:16:59.936 --rc genhtml_legend=1 00:16:59.936 --rc geninfo_all_blocks=1 00:16:59.936 --rc geninfo_unexecuted_blocks=1 00:16:59.936 00:16:59.936 ' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.936 --rc genhtml_branch_coverage=1 00:16:59.936 --rc genhtml_function_coverage=1 00:16:59.936 --rc genhtml_legend=1 00:16:59.936 --rc geninfo_all_blocks=1 00:16:59.936 --rc geninfo_unexecuted_blocks=1 00:16:59.936 00:16:59.936 ' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.936 17:15:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.936 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:59.937 17:15:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0e75291fa9f8458387eb3f20f51dd9f5 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:59.937 Cannot find device "nvmf_init_br" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:59.937 Cannot find device "nvmf_init_br2" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:59.937 Cannot find device "nvmf_tgt_br" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.937 Cannot find device "nvmf_tgt_br2" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:59.937 Cannot find device "nvmf_init_br" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:59.937 Cannot find device "nvmf_init_br2" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:59.937 Cannot find device "nvmf_tgt_br" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:59.937 Cannot find device "nvmf_tgt_br2" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:59.937 Cannot find device "nvmf_br" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:59.937 Cannot find device "nvmf_init_if" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:59.937 Cannot find device "nvmf_init_if2" 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:16:59.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.937 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.196 17:15:52 
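The firewall records that follow come from the ipts wrapper, which tags every rule it inserts with an 'SPDK_NVMF:' comment; nvmf_tcp_fini later removes them wholesale (the iptr helper near the end of this test) by filtering the saved ruleset instead of deleting rules one by one. A sketch of the pattern as reconstructed from the traced commands:

    ipts() {
        # insert the rule exactly as given, tagged so cleanup can find it later
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # teardown (iptr): drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore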
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:00.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:00.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:00.196 00:17:00.196 --- 10.0.0.3 ping statistics --- 00:17:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.196 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:00.196 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:00.197 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:00.197 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:17:00.197 00:17:00.197 --- 10.0.0.4 ping statistics --- 00:17:00.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.197 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:00.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:00.197 00:17:00.197 --- 10.0.0.1 ping statistics --- 00:17:00.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.197 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:00.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:00.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:00.197 00:17:00.197 --- 10.0.0.2 ping statistics --- 00:17:00.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.197 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=86986 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 86986 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86986 ']' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.197 17:15:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.456 [2024-11-19 17:15:52.741906] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:00.456 [2024-11-19 17:15:52.742007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.456 [2024-11-19 17:15:52.896401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.456 [2024-11-19 17:15:52.943129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
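The records around this point show nvmfappstart bringing the target up: after all four ping checks pass, NVMF_APP is prefixed with the namespace wrapper, the app is started with a one-core mask, and waitforlisten blocks until PID 86986 answers on /var/tmp/spdk.sock. A rough sketch (flags and paths taken from the trace; the polling loop is a simplification of what waitforlisten actually does):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the RPC socket is usable
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done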
00:17:00.456 [2024-11-19 17:15:52.943199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.456 [2024-11-19 17:15:52.943210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.456 [2024-11-19 17:15:52.943218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.456 [2024-11-19 17:15:52.943224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.456 [2024-11-19 17:15:52.943629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 [2024-11-19 17:15:53.155464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 null0 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0e75291fa9f8458387eb3f20f51dd9f5 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
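rpc_cmd is a thin wrapper that delivers each RPC to the target's /var/tmp/spdk.sock, so the provisioning traced above and in the records just below is roughly equivalent to these direct scripts/rpc.py calls (all values copied from the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512     # 1024 MiB null bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 0e75291fa9f8458387eb3f20f51dd9f5            # NGUID generated by uuidgen earlier
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0  # exposes the namespace as bdev nvme0n1

The bdev_get_bdevs dump that follows confirms the sizing: 2097152 blocks of 512 bytes is 1 GiB, matching the 1024 (MiB) size argument, and the NGUID comes back as the namespace UUID.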
-- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.715 [2024-11-19 17:15:53.195650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.715 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.974 nvme0n1 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.974 [ 00:17:00.974 { 00:17:00.974 "aliases": [ 00:17:00.974 "0e75291f-a9f8-4583-87eb-3f20f51dd9f5" 00:17:00.974 ], 00:17:00.974 "assigned_rate_limits": { 00:17:00.974 "r_mbytes_per_sec": 0, 00:17:00.974 "rw_ios_per_sec": 0, 00:17:00.974 "rw_mbytes_per_sec": 0, 00:17:00.974 "w_mbytes_per_sec": 0 00:17:00.974 }, 00:17:00.974 "block_size": 512, 00:17:00.974 "claimed": false, 00:17:00.974 "driver_specific": { 00:17:00.974 "mp_policy": "active_passive", 00:17:00.974 "nvme": [ 00:17:00.974 { 00:17:00.974 "ctrlr_data": { 00:17:00.974 "ana_reporting": false, 00:17:00.974 "cntlid": 1, 00:17:00.974 "firmware_revision": "25.01", 00:17:00.974 "model_number": "SPDK bdev Controller", 00:17:00.974 "multi_ctrlr": true, 00:17:00.974 "oacs": { 00:17:00.974 "firmware": 0, 00:17:00.974 "format": 0, 00:17:00.974 "ns_manage": 0, 00:17:00.974 "security": 0 00:17:00.974 }, 00:17:00.974 "serial_number": "00000000000000000000", 00:17:00.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:00.974 "vendor_id": "0x8086" 00:17:00.974 }, 00:17:00.974 "ns_data": { 00:17:00.974 "can_share": true, 00:17:00.974 "id": 1 00:17:00.974 }, 00:17:00.974 "trid": { 00:17:00.974 "adrfam": "IPv4", 00:17:00.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:00.974 "traddr": "10.0.0.3", 00:17:00.974 "trsvcid": "4420", 00:17:00.974 "trtype": "TCP" 00:17:00.974 }, 00:17:00.974 "vs": { 00:17:00.974 "nvme_version": "1.3" 00:17:00.974 } 00:17:00.974 } 00:17:00.974 ] 00:17:00.974 }, 00:17:00.974 "memory_domains": [ 00:17:00.974 { 00:17:00.974 "dma_device_id": "system", 00:17:00.974 "dma_device_type": 1 00:17:00.974 } 00:17:00.974 ], 00:17:00.974 "name": "nvme0n1", 00:17:00.974 "num_blocks": 2097152, 00:17:00.974 "numa_id": -1, 00:17:00.974 "product_name": "NVMe disk", 00:17:00.974 "supported_io_types": { 00:17:00.974 "abort": true, 
00:17:00.974 "compare": true, 00:17:00.974 "compare_and_write": true, 00:17:00.974 "copy": true, 00:17:00.974 "flush": true, 00:17:00.974 "get_zone_info": false, 00:17:00.974 "nvme_admin": true, 00:17:00.974 "nvme_io": true, 00:17:00.974 "nvme_io_md": false, 00:17:00.974 "nvme_iov_md": false, 00:17:00.974 "read": true, 00:17:00.974 "reset": true, 00:17:00.974 "seek_data": false, 00:17:00.974 "seek_hole": false, 00:17:00.974 "unmap": false, 00:17:00.974 "write": true, 00:17:00.974 "write_zeroes": true, 00:17:00.974 "zcopy": false, 00:17:00.974 "zone_append": false, 00:17:00.974 "zone_management": false 00:17:00.974 }, 00:17:00.974 "uuid": "0e75291f-a9f8-4583-87eb-3f20f51dd9f5", 00:17:00.974 "zoned": false 00:17:00.974 } 00:17:00.974 ] 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.974 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.974 [2024-11-19 17:15:53.464140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:00.974 [2024-11-19 17:15:53.464237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e9080 (9): Bad file descriptor 00:17:01.234 [2024-11-19 17:15:53.595861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 [ 00:17:01.234 { 00:17:01.234 "aliases": [ 00:17:01.234 "0e75291f-a9f8-4583-87eb-3f20f51dd9f5" 00:17:01.234 ], 00:17:01.234 "assigned_rate_limits": { 00:17:01.234 "r_mbytes_per_sec": 0, 00:17:01.234 "rw_ios_per_sec": 0, 00:17:01.234 "rw_mbytes_per_sec": 0, 00:17:01.234 "w_mbytes_per_sec": 0 00:17:01.234 }, 00:17:01.234 "block_size": 512, 00:17:01.234 "claimed": false, 00:17:01.234 "driver_specific": { 00:17:01.234 "mp_policy": "active_passive", 00:17:01.234 "nvme": [ 00:17:01.234 { 00:17:01.234 "ctrlr_data": { 00:17:01.234 "ana_reporting": false, 00:17:01.234 "cntlid": 2, 00:17:01.234 "firmware_revision": "25.01", 00:17:01.234 "model_number": "SPDK bdev Controller", 00:17:01.234 "multi_ctrlr": true, 00:17:01.234 "oacs": { 00:17:01.234 "firmware": 0, 00:17:01.234 "format": 0, 00:17:01.234 "ns_manage": 0, 00:17:01.234 "security": 0 00:17:01.234 }, 00:17:01.234 "serial_number": "00000000000000000000", 00:17:01.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.234 "vendor_id": "0x8086" 00:17:01.234 }, 00:17:01.234 "ns_data": { 00:17:01.234 "can_share": true, 00:17:01.234 "id": 1 00:17:01.234 }, 00:17:01.234 "trid": { 00:17:01.234 "adrfam": "IPv4", 00:17:01.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.234 "traddr": "10.0.0.3", 00:17:01.234 "trsvcid": "4420", 00:17:01.234 "trtype": "TCP" 00:17:01.234 }, 00:17:01.234 "vs": { 00:17:01.234 "nvme_version": "1.3" 00:17:01.234 } 00:17:01.234 } 00:17:01.234 ] 
00:17:01.234 }, 00:17:01.234 "memory_domains": [ 00:17:01.234 { 00:17:01.234 "dma_device_id": "system", 00:17:01.234 "dma_device_type": 1 00:17:01.234 } 00:17:01.234 ], 00:17:01.234 "name": "nvme0n1", 00:17:01.234 "num_blocks": 2097152, 00:17:01.234 "numa_id": -1, 00:17:01.234 "product_name": "NVMe disk", 00:17:01.234 "supported_io_types": { 00:17:01.234 "abort": true, 00:17:01.234 "compare": true, 00:17:01.234 "compare_and_write": true, 00:17:01.234 "copy": true, 00:17:01.234 "flush": true, 00:17:01.234 "get_zone_info": false, 00:17:01.234 "nvme_admin": true, 00:17:01.234 "nvme_io": true, 00:17:01.234 "nvme_io_md": false, 00:17:01.234 "nvme_iov_md": false, 00:17:01.234 "read": true, 00:17:01.234 "reset": true, 00:17:01.234 "seek_data": false, 00:17:01.234 "seek_hole": false, 00:17:01.234 "unmap": false, 00:17:01.234 "write": true, 00:17:01.234 "write_zeroes": true, 00:17:01.234 "zcopy": false, 00:17:01.234 "zone_append": false, 00:17:01.234 "zone_management": false 00:17:01.234 }, 00:17:01.234 "uuid": "0e75291f-a9f8-4583-87eb-3f20f51dd9f5", 00:17:01.234 "zoned": false 00:17:01.234 } 00:17:01.234 ] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aMWmWoZLGZ 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aMWmWoZLGZ 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.aMWmWoZLGZ 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 [2024-11-19 17:15:53.672297] 
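After the plain-TCP reset exercise (note cntlid advancing from 1 to 2 across the two bdev dumps), the test moves to TLS: an interchange-format PSK is written to a temp file with 0600 permissions, registered with the keyring as key0, the subsystem is switched from allow-any-host to an explicit host list, and a second listener on port 4421 is created with --secure-channel. Condensed from the trace above and the records that follow (key value and paths copied from the log):

    key_path=/tmp/tmp.aMWmWoZLGZ                                  # from mktemp above
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py keyring_file_add_key key0 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0

Both sides then reference key0: the target through the host entry, the initiator through bdev_nvme_attach_controller ... --psk key0, and the resulting nvme0n1 reports trsvcid 4421.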
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:01.234 [2024-11-19 17:15:53.672421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:01.234 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.235 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.235 [2024-11-19 17:15:53.688313] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.235 nvme0n1 00:17:01.235 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.235 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.235 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.235 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.494 [ 00:17:01.494 { 00:17:01.494 "aliases": [ 00:17:01.494 "0e75291f-a9f8-4583-87eb-3f20f51dd9f5" 00:17:01.494 ], 00:17:01.494 "assigned_rate_limits": { 00:17:01.494 "r_mbytes_per_sec": 0, 00:17:01.494 "rw_ios_per_sec": 0, 00:17:01.494 "rw_mbytes_per_sec": 0, 00:17:01.494 "w_mbytes_per_sec": 0 00:17:01.494 }, 00:17:01.494 "block_size": 512, 00:17:01.494 "claimed": false, 00:17:01.494 "driver_specific": { 00:17:01.494 "mp_policy": "active_passive", 00:17:01.494 "nvme": [ 00:17:01.494 { 00:17:01.494 "ctrlr_data": { 00:17:01.494 "ana_reporting": false, 00:17:01.494 "cntlid": 3, 00:17:01.494 "firmware_revision": "25.01", 00:17:01.494 "model_number": "SPDK bdev Controller", 00:17:01.494 "multi_ctrlr": true, 00:17:01.494 "oacs": { 00:17:01.494 "firmware": 0, 00:17:01.494 "format": 0, 00:17:01.494 "ns_manage": 0, 00:17:01.494 "security": 0 00:17:01.494 }, 00:17:01.494 "serial_number": "00000000000000000000", 00:17:01.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.494 "vendor_id": "0x8086" 00:17:01.494 }, 00:17:01.494 "ns_data": { 00:17:01.494 "can_share": true, 00:17:01.494 "id": 1 00:17:01.494 }, 00:17:01.494 "trid": { 00:17:01.494 "adrfam": "IPv4", 00:17:01.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.494 "traddr": "10.0.0.3", 00:17:01.494 "trsvcid": "4421", 00:17:01.494 "trtype": "TCP" 00:17:01.494 }, 00:17:01.494 "vs": { 00:17:01.494 "nvme_version": "1.3" 00:17:01.494 } 00:17:01.494 } 00:17:01.494 ] 00:17:01.494 }, 00:17:01.494 "memory_domains": [ 00:17:01.494 { 00:17:01.494 "dma_device_id": "system", 00:17:01.494 "dma_device_type": 1 00:17:01.494 } 00:17:01.494 ], 00:17:01.494 "name": "nvme0n1", 00:17:01.494 "num_blocks": 
2097152, 00:17:01.494 "numa_id": -1, 00:17:01.494 "product_name": "NVMe disk", 00:17:01.494 "supported_io_types": { 00:17:01.494 "abort": true, 00:17:01.494 "compare": true, 00:17:01.494 "compare_and_write": true, 00:17:01.494 "copy": true, 00:17:01.494 "flush": true, 00:17:01.494 "get_zone_info": false, 00:17:01.494 "nvme_admin": true, 00:17:01.494 "nvme_io": true, 00:17:01.494 "nvme_io_md": false, 00:17:01.494 "nvme_iov_md": false, 00:17:01.494 "read": true, 00:17:01.494 "reset": true, 00:17:01.494 "seek_data": false, 00:17:01.494 "seek_hole": false, 00:17:01.494 "unmap": false, 00:17:01.494 "write": true, 00:17:01.494 "write_zeroes": true, 00:17:01.494 "zcopy": false, 00:17:01.494 "zone_append": false, 00:17:01.494 "zone_management": false 00:17:01.494 }, 00:17:01.494 "uuid": "0e75291f-a9f8-4583-87eb-3f20f51dd9f5", 00:17:01.494 "zoned": false 00:17:01.494 } 00:17:01.494 ] 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.aMWmWoZLGZ 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.494 rmmod nvme_tcp 00:17:01.494 rmmod nvme_fabrics 00:17:01.494 rmmod nvme_keyring 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 86986 ']' 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 86986 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86986 ']' 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86986 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86986 00:17:01.494 17:15:53 
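The teardown traced around this point unwinds everything in reverse: the initiator detaches, the key file is removed, nvmfcleanup unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules (the rmmod lines above; the {1..20} loop retries, presumably because the modules can stay referenced briefly), killprocess verifies that PID 86986 is still our reactor before killing it, and the network teardown that follows mirrors nvmf_veth_init. Roughly, as a sketch:

    rm -f /tmp/tmp.aMWmWoZLGZ
    modprobe -r nvme-tcp nvme-fabrics                       # retried until the modules unload
    [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ] && kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip only the tagged rules
    ip netns delete nvmf_tgt_ns_spdk                        # remove_spdk_ns (approximate)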
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.494 killing process with pid 86986 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86986' 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86986 00:17:01.494 17:15:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86986 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:01.753 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:17:02.012 00:17:02.012 real 0m2.391s 00:17:02.012 user 0m1.782s 00:17:02.012 sys 0m0.770s 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.012 ************************************ 00:17:02.012 END TEST nvmf_async_init 00:17:02.012 ************************************ 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.012 ************************************ 00:17:02.012 START TEST dma 00:17:02.012 ************************************ 00:17:02.012 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:02.271 * Looking for test storage... 00:17:02.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.271 --rc genhtml_branch_coverage=1 00:17:02.271 --rc genhtml_function_coverage=1 00:17:02.271 --rc genhtml_legend=1 00:17:02.271 --rc geninfo_all_blocks=1 00:17:02.271 --rc geninfo_unexecuted_blocks=1 00:17:02.271 00:17:02.271 ' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.271 --rc genhtml_branch_coverage=1 00:17:02.271 --rc genhtml_function_coverage=1 00:17:02.271 --rc genhtml_legend=1 00:17:02.271 --rc geninfo_all_blocks=1 00:17:02.271 --rc geninfo_unexecuted_blocks=1 00:17:02.271 00:17:02.271 ' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.271 --rc genhtml_branch_coverage=1 00:17:02.271 --rc genhtml_function_coverage=1 00:17:02.271 --rc genhtml_legend=1 00:17:02.271 --rc geninfo_all_blocks=1 00:17:02.271 --rc geninfo_unexecuted_blocks=1 00:17:02.271 00:17:02.271 ' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.271 --rc genhtml_branch_coverage=1 00:17:02.271 --rc genhtml_function_coverage=1 00:17:02.271 --rc genhtml_legend=1 00:17:02.271 --rc geninfo_all_blocks=1 00:17:02.271 --rc geninfo_unexecuted_blocks=1 00:17:02.271 00:17:02.271 ' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.271 17:15:54 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.271 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.272 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:17:02.272 00:17:02.272 real 0m0.199s 00:17:02.272 user 0m0.107s 00:17:02.272 sys 0m0.105s 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:02.272 ************************************ 00:17:02.272 END TEST dma 00:17:02.272 ************************************ 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.272 17:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.272 ************************************ 00:17:02.272 START TEST nvmf_identify 00:17:02.272 ************************************ 00:17:02.272 17:15:54 
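The dma suite above is effectively a no-op on this run: dma.sh checks the transport right after sourcing common.sh and exits 0 when it is not rdma, which is why the whole test accounts for only ~0.2 s of wall time. The guard as traced at dma.sh lines 12-13 (assuming the transport variable is TEST_TRANSPORT, as elsewhere in the SPDK test environment):

    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0   # the DMA paths under test only apply to RDMA transports
    fi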
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.531 * Looking for test storage... 00:17:02.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.531 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.532 --rc genhtml_branch_coverage=1 00:17:02.532 --rc genhtml_function_coverage=1 00:17:02.532 --rc genhtml_legend=1 00:17:02.532 --rc geninfo_all_blocks=1 00:17:02.532 --rc geninfo_unexecuted_blocks=1 00:17:02.532 00:17:02.532 ' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.532 --rc genhtml_branch_coverage=1 00:17:02.532 --rc genhtml_function_coverage=1 00:17:02.532 --rc genhtml_legend=1 00:17:02.532 --rc geninfo_all_blocks=1 00:17:02.532 --rc geninfo_unexecuted_blocks=1 00:17:02.532 00:17:02.532 ' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.532 --rc genhtml_branch_coverage=1 00:17:02.532 --rc genhtml_function_coverage=1 00:17:02.532 --rc genhtml_legend=1 00:17:02.532 --rc geninfo_all_blocks=1 00:17:02.532 --rc geninfo_unexecuted_blocks=1 00:17:02.532 00:17:02.532 ' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.532 --rc genhtml_branch_coverage=1 00:17:02.532 --rc genhtml_function_coverage=1 00:17:02.532 --rc genhtml_legend=1 00:17:02.532 --rc geninfo_all_blocks=1 00:17:02.532 --rc geninfo_unexecuted_blocks=1 00:17:02.532 00:17:02.532 ' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.532 
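[editor's note] The lt/cmp_versions trace above reduces to a field-by-field numeric comparison: both version strings are split on '.' and '-' via IFS, and the loop returns as soon as a field differs. A minimal standalone sketch of that logic (the function name lt matches the trace; the condensed body is a paraphrase of scripts/common.sh, not the verbatim source):

  lt() {  # return 0 when $1 < $2, comparing dot/dash-separated numeric fields
      local IFS=.- i v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0  # missing fields count as 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # the branch taken in the trace above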
17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.532 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.533 17:15:54 
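[editor's note] Per the common.sh trace above, the host identity reused by every later initiator command is minted once from nvme gen-hostnqn: the UUID suffix of the NQN doubles as the host ID, and both are packed into the NVME_HOST argument array. A hedged restatement of that pattern (variable names as in the trace; the hostid derivation shown is one plausible way to get the value the trace displays):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the uuid after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 -n <subnqn>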
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:02.533 Cannot find device "nvmf_init_br" 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:02.533 17:15:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:02.533 Cannot find device "nvmf_init_br2" 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:02.533 Cannot find device "nvmf_tgt_br" 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:02.533 Cannot find device "nvmf_tgt_br2" 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:02.533 Cannot find device "nvmf_init_br" 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:02.533 Cannot find device "nvmf_init_br2" 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:02.533 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:02.792 Cannot find device "nvmf_tgt_br" 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:02.792 Cannot find device "nvmf_tgt_br2" 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:02.792 Cannot find device "nvmf_br" 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:02.792 Cannot find device "nvmf_init_if" 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:02.792 Cannot find device "nvmf_init_if2" 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.792 
17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:02.792 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:03.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:03.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:17:03.051 00:17:03.051 --- 10.0.0.3 ping statistics --- 00:17:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.051 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:03.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:03.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:17:03.051 00:17:03.051 --- 10.0.0.4 ping statistics --- 00:17:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.051 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:03.051 00:17:03.051 --- 10.0.0.1 ping statistics --- 00:17:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.051 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:03.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:03.051 00:17:03.051 --- 10.0.0.2 ping statistics --- 00:17:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.051 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.051 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87302 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87302 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87302 ']' 00:17:03.052 
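[editor's note] nvmf_veth_init above first tears down any leftovers (hence the harmless "Cannot find device" messages), then builds the topology: initiator veth pairs in the root namespace, target pairs whose far ends live in nvmf_tgt_ns_spdk, everything enslaved to the nvmf_br bridge, iptables openings tagged with an SPDK_NVMF comment so teardown can find them, and a ping in each direction to prove reachability. A trimmed sketch covering one interface of each kind, using the names and addresses from the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns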
17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.052 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.052 [2024-11-19 17:15:55.472455] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:03.052 [2024-11-19 17:15:55.472544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.310 [2024-11-19 17:15:55.629041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.310 [2024-11-19 17:15:55.692018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.310 [2024-11-19 17:15:55.692095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.310 [2024-11-19 17:15:55.692109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.310 [2024-11-19 17:15:55.692120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.310 [2024-11-19 17:15:55.692130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
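[editor's note] nvmf_tgt was launched inside the namespace with the flags shown in the trace, its PID recorded as nvmfpid for the EXIT trap, and waitforlisten blocks until the app answers on its RPC socket. A hedged approximation of that wait (waitforlisten in autotest_common.sh is more elaborate; this only shows the idea):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc.py exits non-zero until the app is listening on /var/tmp/spdk.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null && break
      sleep 0.1
  done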
00:17:03.310 [2024-11-19 17:15:55.693637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.310 [2024-11-19 17:15:55.693786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.310 [2024-11-19 17:15:55.693913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.310 [2024-11-19 17:15:55.693927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 [2024-11-19 17:15:55.863419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 Malloc0 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 [2024-11-19 17:15:55.989119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 [ 00:17:03.570 { 00:17:03.570 "allow_any_host": true, 00:17:03.570 "hosts": [], 00:17:03.570 "listen_addresses": [ 00:17:03.570 { 00:17:03.570 "adrfam": "IPv4", 00:17:03.570 "traddr": "10.0.0.3", 00:17:03.570 "trsvcid": "4420", 00:17:03.570 "trtype": "TCP" 00:17:03.570 } 00:17:03.570 ], 00:17:03.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:03.570 "subtype": "Discovery" 00:17:03.570 }, 00:17:03.570 { 00:17:03.570 "allow_any_host": true, 00:17:03.570 "hosts": [], 00:17:03.570 "listen_addresses": [ 00:17:03.570 { 00:17:03.570 "adrfam": "IPv4", 00:17:03.570 "traddr": "10.0.0.3", 00:17:03.570 "trsvcid": "4420", 00:17:03.570 "trtype": "TCP" 00:17:03.570 } 00:17:03.570 ], 00:17:03.570 "max_cntlid": 65519, 00:17:03.570 "max_namespaces": 32, 00:17:03.570 "min_cntlid": 1, 00:17:03.570 "model_number": "SPDK bdev Controller", 00:17:03.570 "namespaces": [ 00:17:03.570 { 00:17:03.570 "bdev_name": "Malloc0", 00:17:03.570 "eui64": "ABCDEF0123456789", 00:17:03.570 "name": "Malloc0", 00:17:03.570 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:03.570 "nsid": 1, 00:17:03.570 "uuid": "ad1ad11e-ccbb-414a-ab78-4d766d407952" 00:17:03.570 } 00:17:03.570 ], 00:17:03.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.570 "serial_number": "SPDK00000000000001", 00:17:03.570 "subtype": "NVMe" 00:17:03.570 } 00:17:03.570 ] 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.570 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:03.570 [2024-11-19 17:15:56.042689] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
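[editor's note] For reference, the configuration that produced the subsystem JSON above can be replayed by hand against the same socket; rpc_cmd in the trace forwards its arguments to scripts/rpc.py:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems                          # emits the JSON shown above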
00:17:03.570 [2024-11-19 17:15:56.042912] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87342 ] 00:17:03.833 [2024-11-19 17:15:56.196284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:03.833 [2024-11-19 17:15:56.196351] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:03.833 [2024-11-19 17:15:56.196368] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:03.833 [2024-11-19 17:15:56.196379] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:03.833 [2024-11-19 17:15:56.196391] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:03.833 [2024-11-19 17:15:56.196822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:03.833 [2024-11-19 17:15:56.196896] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1377d90 0 00:17:03.833 [2024-11-19 17:15:56.202763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:03.833 [2024-11-19 17:15:56.202785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:03.833 [2024-11-19 17:15:56.202792] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:03.833 [2024-11-19 17:15:56.202795] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:03.833 [2024-11-19 17:15:56.202830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.202837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.202841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.833 [2024-11-19 17:15:56.202856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:03.833 [2024-11-19 17:15:56.202901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.833 [2024-11-19 17:15:56.210764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.833 [2024-11-19 17:15:56.210782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.833 [2024-11-19 17:15:56.210787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.210792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.833 [2024-11-19 17:15:56.210804] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:03.833 [2024-11-19 17:15:56.210812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:03.833 [2024-11-19 17:15:56.210818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:03.833 [2024-11-19 17:15:56.210835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.210839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:03.833 [2024-11-19 17:15:56.210843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.833 [2024-11-19 17:15:56.210851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.833 [2024-11-19 17:15:56.210890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.833 [2024-11-19 17:15:56.210977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.833 [2024-11-19 17:15:56.210983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.833 [2024-11-19 17:15:56.210986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.210990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.833 [2024-11-19 17:15:56.210996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:03.833 [2024-11-19 17:15:56.211003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:03.833 [2024-11-19 17:15:56.211011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.833 [2024-11-19 17:15:56.211025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.833 [2024-11-19 17:15:56.211044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.833 [2024-11-19 17:15:56.211115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.833 [2024-11-19 17:15:56.211121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.833 [2024-11-19 17:15:56.211124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.833 [2024-11-19 17:15:56.211134] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:03.833 [2024-11-19 17:15:56.211143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:03.833 [2024-11-19 17:15:56.211149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.833 [2024-11-19 17:15:56.211163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.833 [2024-11-19 17:15:56.211181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.833 [2024-11-19 17:15:56.211232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.833 [2024-11-19 17:15:56.211238] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.833 [2024-11-19 17:15:56.211241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.833 [2024-11-19 17:15:56.211250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:03.833 [2024-11-19 17:15:56.211259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.833 [2024-11-19 17:15:56.211267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.211274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.834 [2024-11-19 17:15:56.211290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.211341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.211347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.211350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.834 [2024-11-19 17:15:56.211358] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:03.834 [2024-11-19 17:15:56.211363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:03.834 [2024-11-19 17:15:56.211370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:03.834 [2024-11-19 17:15:56.211482] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:03.834 [2024-11-19 17:15:56.211494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:03.834 [2024-11-19 17:15:56.211505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.211519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.834 [2024-11-19 17:15:56.211539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.211624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.211635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.211639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:03.834 [2024-11-19 17:15:56.211642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.834 [2024-11-19 17:15:56.211648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:03.834 [2024-11-19 17:15:56.211657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.211672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.834 [2024-11-19 17:15:56.211689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.211758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.211765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.211769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.834 [2024-11-19 17:15:56.211777] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:03.834 [2024-11-19 17:15:56.211782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.211790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:03.834 [2024-11-19 17:15:56.211807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.211819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.211823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.211830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.834 [2024-11-19 17:15:56.211851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.211987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.834 [2024-11-19 17:15:56.212000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.834 [2024-11-19 17:15:56.212004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1377d90): datao=0, datal=4096, cccid=0 00:17:03.834 [2024-11-19 17:15:56.212012] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13b8600) on tqpair(0x1377d90): expected_datao=0, payload_size=4096 00:17:03.834 [2024-11-19 17:15:56.212017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212025] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.212043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.212047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.834 [2024-11-19 17:15:56.212059] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:03.834 [2024-11-19 17:15:56.212064] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:03.834 [2024-11-19 17:15:56.212069] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:03.834 [2024-11-19 17:15:56.212074] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:03.834 [2024-11-19 17:15:56.212078] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:03.834 [2024-11-19 17:15:56.212083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.212096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.212104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.834 [2024-11-19 17:15:56.212140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.212208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.212214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.212217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.834 [2024-11-19 17:15:56.212229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.834 
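[editor's note] The DEBUG records here are spdk_nvme_identify's controller bring-up against the discovery subsystem: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN toggled 0 then 1, a wait for CSTS.RDY, IDENTIFY controller, then AER and keep-alive setup (continued below). The kernel initiator exercises the same fabric sequence, so a hedged cross-check of the listener from the root namespace could look like this (standard nvme-cli flags; hostnqn/hostid as generated earlier):

  nvme discover -t tcp -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # expected to list nqn.2016-06.io.spdk:cnode1 as an NVMe/TCP subsystem entry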
[2024-11-19 17:15:56.212249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.834 [2024-11-19 17:15:56.212267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.834 [2024-11-19 17:15:56.212284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.834 [2024-11-19 17:15:56.212303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.212319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:03.834 [2024-11-19 17:15:56.212326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1377d90) 00:17:03.834 [2024-11-19 17:15:56.212336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.834 [2024-11-19 17:15:56.212356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8600, cid 0, qid 0 00:17:03.834 [2024-11-19 17:15:56.212363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8780, cid 1, qid 0 00:17:03.834 [2024-11-19 17:15:56.212367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8900, cid 2, qid 0 00:17:03.834 [2024-11-19 17:15:56.212372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.834 [2024-11-19 17:15:56.212376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8c00, cid 4, qid 0 00:17:03.834 [2024-11-19 17:15:56.212482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.834 [2024-11-19 17:15:56.212488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.834 [2024-11-19 17:15:56.212491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.834 [2024-11-19 17:15:56.212495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8c00) on tqpair=0x1377d90 00:17:03.835 [2024-11-19 
17:15:56.212500] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:03.835 [2024-11-19 17:15:56.212506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:03.835 [2024-11-19 17:15:56.212516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1377d90) 00:17:03.835 [2024-11-19 17:15:56.212527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.835 [2024-11-19 17:15:56.212546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8c00, cid 4, qid 0 00:17:03.835 [2024-11-19 17:15:56.212609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.835 [2024-11-19 17:15:56.212620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.835 [2024-11-19 17:15:56.212624] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212627] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1377d90): datao=0, datal=4096, cccid=4 00:17:03.835 [2024-11-19 17:15:56.212632] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13b8c00) on tqpair(0x1377d90): expected_datao=0, payload_size=4096 00:17:03.835 [2024-11-19 17:15:56.212636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212646] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.835 [2024-11-19 17:15:56.212659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.835 [2024-11-19 17:15:56.212663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8c00) on tqpair=0x1377d90 00:17:03.835 [2024-11-19 17:15:56.212679] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:03.835 [2024-11-19 17:15:56.212713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1377d90) 00:17:03.835 [2024-11-19 17:15:56.212725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.835 [2024-11-19 17:15:56.212732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1377d90) 00:17:03.835 [2024-11-19 17:15:56.212757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.835 [2024-11-19 17:15:56.212785] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8c00, cid 4, qid 0 00:17:03.835 [2024-11-19 17:15:56.212793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8d80, cid 5, qid 0 00:17:03.835 [2024-11-19 17:15:56.212920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.835 [2024-11-19 17:15:56.212927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.835 [2024-11-19 17:15:56.212930] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212934] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1377d90): datao=0, datal=1024, cccid=4 00:17:03.835 [2024-11-19 17:15:56.212940] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13b8c00) on tqpair(0x1377d90): expected_datao=0, payload_size=1024 00:17:03.835 [2024-11-19 17:15:56.212944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212950] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212953] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.835 [2024-11-19 17:15:56.212963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.835 [2024-11-19 17:15:56.212967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.212970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8d80) on tqpair=0x1377d90 00:17:03.835 [2024-11-19 17:15:56.253857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.835 [2024-11-19 17:15:56.253874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.835 [2024-11-19 17:15:56.253879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.253883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8c00) on tqpair=0x1377d90 00:17:03.835 [2024-11-19 17:15:56.253897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.253902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1377d90) 00:17:03.835 [2024-11-19 17:15:56.253910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.835 [2024-11-19 17:15:56.253939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8c00, cid 4, qid 0 00:17:03.835 [2024-11-19 17:15:56.254027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.835 [2024-11-19 17:15:56.254033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.835 [2024-11-19 17:15:56.254037] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1377d90): datao=0, datal=3072, cccid=4 00:17:03.835 [2024-11-19 17:15:56.254045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13b8c00) on tqpair(0x1377d90): expected_datao=0, payload_size=3072 00:17:03.835 [2024-11-19 17:15:56.254049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:03.835 [2024-11-19 17:15:56.254059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.835 [2024-11-19 17:15:56.254072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.835 [2024-11-19 17:15:56.254075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8c00) on tqpair=0x1377d90 00:17:03.835 [2024-11-19 17:15:56.254088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1377d90) 00:17:03.835 [2024-11-19 17:15:56.254099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.835 [2024-11-19 17:15:56.254122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8c00, cid 4, qid 0 00:17:03.835 [2024-11-19 17:15:56.254199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.835 [2024-11-19 17:15:56.254205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.835 [2024-11-19 17:15:56.254208] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254211] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1377d90): datao=0, datal=8, cccid=4 00:17:03.835 [2024-11-19 17:15:56.254216] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13b8c00) on tqpair(0x1377d90): expected_datao=0, payload_size=8 00:17:03.835 [2024-11-19 17:15:56.254220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254226] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.835 [2024-11-19 17:15:56.254230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.835 ===================================================== 00:17:03.835 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:03.835 ===================================================== 00:17:03.835 Controller Capabilities/Features 00:17:03.835 ================================ 00:17:03.835 Vendor ID: 0000 00:17:03.835 Subsystem Vendor ID: 0000 00:17:03.835 Serial Number: .................... 00:17:03.835 Model Number: ........................................ 
00:17:03.835 Firmware Version: 25.01 00:17:03.835 Recommended Arb Burst: 0 00:17:03.835 IEEE OUI Identifier: 00 00 00 00:17:03.835 Multi-path I/O 00:17:03.835 May have multiple subsystem ports: No 00:17:03.835 May have multiple controllers: No 00:17:03.835 Associated with SR-IOV VF: No 00:17:03.835 Max Data Transfer Size: 131072 00:17:03.835 Max Number of Namespaces: 0 00:17:03.835 Max Number of I/O Queues: 1024 00:17:03.835 NVMe Specification Version (VS): 1.3 00:17:03.835 NVMe Specification Version (Identify): 1.3 00:17:03.835 Maximum Queue Entries: 128 00:17:03.835 Contiguous Queues Required: Yes 00:17:03.835 Arbitration Mechanisms Supported 00:17:03.835 Weighted Round Robin: Not Supported 00:17:03.835 Vendor Specific: Not Supported 00:17:03.835 Reset Timeout: 15000 ms 00:17:03.835 Doorbell Stride: 4 bytes 00:17:03.835 NVM Subsystem Reset: Not Supported 00:17:03.835 Command Sets Supported 00:17:03.835 NVM Command Set: Supported 00:17:03.835 Boot Partition: Not Supported 00:17:03.835 Memory Page Size Minimum: 4096 bytes 00:17:03.835 Memory Page Size Maximum: 4096 bytes 00:17:03.835 Persistent Memory Region: Not Supported 00:17:03.835 Optional Asynchronous Events Supported 00:17:03.835 Namespace Attribute Notices: Not Supported 00:17:03.835 Firmware Activation Notices: Not Supported 00:17:03.835 ANA Change Notices: Not Supported 00:17:03.835 PLE Aggregate Log Change Notices: Not Supported 00:17:03.835 LBA Status Info Alert Notices: Not Supported 00:17:03.835 EGE Aggregate Log Change Notices: Not Supported 00:17:03.835 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.835 Zone Descriptor Change Notices: Not Supported 00:17:03.835 Discovery Log Change Notices: Supported 00:17:03.835 Controller Attributes 00:17:03.835 128-bit Host Identifier: Not Supported 00:17:03.835 Non-Operational Permissive Mode: Not Supported 00:17:03.835 NVM Sets: Not Supported 00:17:03.835 Read Recovery Levels: Not Supported 00:17:03.835 Endurance Groups: Not Supported 00:17:03.835 Predictable Latency Mode: Not Supported 00:17:03.835 Traffic Based Keep Alive: Not Supported 00:17:03.835 Namespace Granularity: Not Supported 00:17:03.835 SQ Associations: Not Supported 00:17:03.836 UUID List: Not Supported 00:17:03.836 Multi-Domain Subsystem: Not Supported 00:17:03.836 Fixed Capacity Management: Not Supported 00:17:03.836 Variable Capacity Management: Not Supported 00:17:03.836 Delete Endurance Group: Not Supported 00:17:03.836 Delete NVM Set: Not Supported 00:17:03.836 Extended LBA Formats Supported: Not Supported 00:17:03.836 Flexible Data Placement Supported: Not Supported 00:17:03.836 00:17:03.836 Controller Memory Buffer Support 00:17:03.836 ================================ 00:17:03.836 Supported: No 00:17:03.836 00:17:03.836 Persistent Memory Region Support 00:17:03.836 ================================ 00:17:03.836 Supported: No 00:17:03.836 00:17:03.836 Admin Command Set Attributes 00:17:03.836 ============================ 00:17:03.836 Security Send/Receive: Not Supported 00:17:03.836 Format NVM: Not Supported 00:17:03.836 Firmware Activate/Download: Not Supported 00:17:03.836 Namespace Management: Not Supported 00:17:03.836 Device Self-Test: Not Supported 00:17:03.836 Directives: Not Supported 00:17:03.836 NVMe-MI: Not Supported 00:17:03.836 Virtualization Management: Not Supported 00:17:03.836 Doorbell Buffer Config: Not Supported 00:17:03.836 Get LBA Status Capability: Not Supported 00:17:03.836 Command & Feature Lockdown Capability: Not Supported 00:17:03.836 Abort Command Limit: 1 00:17:03.836 Async
Event Request Limit: 4 00:17:03.836 Number of Firmware Slots: N/A 00:17:03.836 Firmware Slot 1 Read-Only: N/A 00:17:03.836 [2024-11-19 17:15:56.298776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.836 [2024-11-19 17:15:56.298795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.836 [2024-11-19 17:15:56.298799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.298803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8c00) on tqpair=0x1377d90 00:17:03.836 Firmware Activation Without Reset: N/A 00:17:03.836 Multiple Update Detection Support: N/A 00:17:03.836 Firmware Update Granularity: No Information Provided 00:17:03.836 Per-Namespace SMART Log: No 00:17:03.836 Asymmetric Namespace Access Log Page: Not Supported 00:17:03.836 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:03.836 Command Effects Log Page: Not Supported 00:17:03.836 Get Log Page Extended Data: Supported 00:17:03.836 Telemetry Log Pages: Not Supported 00:17:03.836 Persistent Event Log Pages: Not Supported 00:17:03.836 Supported Log Pages Log Page: May Support 00:17:03.836 Commands Supported & Effects Log Page: Not Supported 00:17:03.836 Feature Identifiers & Effects Log Page: May Support 00:17:03.836 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.836 Data Area 4 for Telemetry Log: Not Supported 00:17:03.836 Error Log Page Entries Supported: 128 00:17:03.836 Keep Alive: Not Supported 00:17:03.836 00:17:03.836 NVM Command Set Attributes 00:17:03.836 ========================== 00:17:03.836 Submission Queue Entry Size 00:17:03.836 Max: 1 00:17:03.836 Min: 1 00:17:03.836 Completion Queue Entry Size 00:17:03.836 Max: 1 00:17:03.836 Min: 1 00:17:03.836 Number of Namespaces: 0 00:17:03.836 Compare Command: Not Supported 00:17:03.836 Write Uncorrectable Command: Not Supported 00:17:03.836 Dataset Management Command: Not Supported 00:17:03.836 Write Zeroes Command: Not Supported 00:17:03.836 Set Features Save Field: Not Supported 00:17:03.836 Reservations: Not Supported 00:17:03.836 Timestamp: Not Supported 00:17:03.836 Copy: Not Supported 00:17:03.836 Volatile Write Cache: Not Present 00:17:03.836 Atomic Write Unit (Normal): 1 00:17:03.836 Atomic Write Unit (PFail): 1 00:17:03.836 Atomic Compare & Write Unit: 1 00:17:03.836 Fused Compare & Write: Supported 00:17:03.836 Scatter-Gather List 00:17:03.836 SGL Command Set: Supported 00:17:03.836 SGL Keyed: Supported 00:17:03.836 SGL Bit Bucket Descriptor: Not Supported 00:17:03.836 SGL Metadata Pointer: Not Supported 00:17:03.836 Oversized SGL: Not Supported 00:17:03.836 SGL Metadata Address: Not Supported 00:17:03.836 SGL Offset: Supported 00:17:03.836 Transport SGL Data Block: Not Supported 00:17:03.836 Replay Protected Memory Block: Not Supported 00:17:03.836 00:17:03.836 Firmware Slot Information 00:17:03.836 ========================= 00:17:03.836 Active slot: 0 00:17:03.836 00:17:03.836 00:17:03.836 Error Log 00:17:03.836 ========= 00:17:03.836 00:17:03.836 Active Namespaces 00:17:03.836 ================= 00:17:03.836 Discovery Log Page 00:17:03.836 ================== 00:17:03.836 Generation Counter: 2 00:17:03.836 Number of Records: 2 00:17:03.836 Record Format: 0 00:17:03.836 00:17:03.836 Discovery Log Entry 0 00:17:03.836 ---------------------- 00:17:03.836 Transport Type: 3 (TCP) 00:17:03.836 Address Family: 1 (IPv4) 00:17:03.836 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:03.836 Entry Flags: 00:17:03.836 Duplicate Returned
Information: 1 00:17:03.836 Explicit Persistent Connection Support for Discovery: 1 00:17:03.836 Transport Requirements: 00:17:03.836 Secure Channel: Not Required 00:17:03.836 Port ID: 0 (0x0000) 00:17:03.836 Controller ID: 65535 (0xffff) 00:17:03.836 Admin Max SQ Size: 128 00:17:03.836 Transport Service Identifier: 4420 00:17:03.836 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:03.836 Transport Address: 10.0.0.3 00:17:03.836 Discovery Log Entry 1 00:17:03.836 ---------------------- 00:17:03.836 Transport Type: 3 (TCP) 00:17:03.836 Address Family: 1 (IPv4) 00:17:03.836 Subsystem Type: 2 (NVM Subsystem) 00:17:03.836 Entry Flags: 00:17:03.836 Duplicate Returned Information: 0 00:17:03.836 Explicit Persistent Connection Support for Discovery: 0 00:17:03.836 Transport Requirements: 00:17:03.836 Secure Channel: Not Required 00:17:03.836 Port ID: 0 (0x0000) 00:17:03.836 Controller ID: 65535 (0xffff) 00:17:03.836 Admin Max SQ Size: 128 00:17:03.836 Transport Service Identifier: 4420 00:17:03.836 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:03.836 Transport Address: 10.0.0.3 [2024-11-19 17:15:56.298899] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:03.836 [2024-11-19 17:15:56.298912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8600) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.298919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.836 [2024-11-19 17:15:56.298925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8780) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.298929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.836 [2024-11-19 17:15:56.298934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8900) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.836 [2024-11-19 17:15:56.298943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.298947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.836 [2024-11-19 17:15:56.298956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.298961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.298964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.836 [2024-11-19 17:15:56.298972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.836 [2024-11-19 17:15:56.298996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.836 [2024-11-19 17:15:56.299100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.836 [2024-11-19 17:15:56.299106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.836 [2024-11-19 17:15:56.299109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.299113] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.299121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.299125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.299128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.836 [2024-11-19 17:15:56.299135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.836 [2024-11-19 17:15:56.299156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.836 [2024-11-19 17:15:56.299244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.836 [2024-11-19 17:15:56.299250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.836 [2024-11-19 17:15:56.299253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.836 [2024-11-19 17:15:56.299257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.836 [2024-11-19 17:15:56.299262] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:03.836 [2024-11-19 17:15:56.299267] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:03.837 [2024-11-19 17:15:56.299276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.299372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 
17:15:56.299483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.299592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.299698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.299839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on 
tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.299867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.299888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.299945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.299951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.299981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.299986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.299996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.300011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.300030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.300090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.300096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.300100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.300113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.300128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.300145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.300199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.300205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.300208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.300221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300225] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.300235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.300252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.300311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.300335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.300339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.300353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.300368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.300386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.300439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.300445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.300449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.300462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 [2024-11-19 17:15:56.300477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.837 [2024-11-19 17:15:56.300493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.837 [2024-11-19 17:15:56.300548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.837 [2024-11-19 17:15:56.300554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.837 [2024-11-19 17:15:56.300557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.837 [2024-11-19 17:15:56.300570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.837 [2024-11-19 17:15:56.300578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.837 
[2024-11-19 17:15:56.300584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.300601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.300653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.300659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.300662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.300675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.300689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.300705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.300779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.300787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.300790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.300803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.300818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.300837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.300899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.300906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.300909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.300922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.300930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.300936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.300953] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 
[2024-11-19 17:15:56.301355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.838 [2024-11-19 17:15:56.301687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.838 [2024-11-19 17:15:56.301691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
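The run of near-identical FABRIC PROPERTY GET entries here is not noise: after "Prepare to destruct" the host issued a FABRIC PROPERTY SET to start a normal shutdown (with RTD3E = 0 and a 10000 ms shutdown timeout, logged above) and is now re-reading CSTS until the shutdown-status field flips. In NVMe-register terms the loop amounts to the following sketch, where read_reg()/write_reg() are hypothetical stand-ins for the fabrics property get/set path:

#include <stdint.h>

#define NVME_REG_CC         0x14
#define NVME_REG_CSTS       0x1c
#define NVME_CC_SHN_NORMAL  (1u << 14)  /* CC.SHN = 01b, normal shutdown */
#define NVME_CSTS_SHST_MASK (3u << 2)
#define NVME_CSTS_SHST_DONE (2u << 2)   /* SHST = 10b, shutdown complete */

uint32_t read_reg(uint32_t offset);                /* hypothetical */
void write_reg(uint32_t offset, uint32_t value);   /* hypothetical */

static void
shutdown_controller(void)
{
        write_reg(NVME_REG_CC, read_reg(NVME_REG_CC) | NVME_CC_SHN_NORMAL);

        /* Each iteration is one of the FABRIC PROPERTY GETs logged here;
         * the poll ends below with "shutdown complete in 7 milliseconds". */
        while ((read_reg(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) !=
               NVME_CSTS_SHST_DONE) {
                ;
        }
}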
00:17:03.838 [2024-11-19 17:15:56.301694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.838 [2024-11-19 17:15:56.301703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.838 [2024-11-19 17:15:56.301711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.838 [2024-11-19 17:15:56.301717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.838 [2024-11-19 17:15:56.301734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.838 [2024-11-19 17:15:56.301805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.301812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.301815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.301828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.301842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.301861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.301916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.301922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.301925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.301938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.301947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.301953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.301970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302046] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302387] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.302631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.302685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.302691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.302694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.302707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.302715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.302722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.306750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.306776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.306783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.306786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.306790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.306803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.306808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.306811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1377d90) 00:17:03.839 [2024-11-19 17:15:56.306819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.839 [2024-11-19 17:15:56.306842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13b8a80, cid 3, qid 0 00:17:03.839 [2024-11-19 17:15:56.306900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.839 [2024-11-19 17:15:56.306907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.839 [2024-11-19 17:15:56.306910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.839 [2024-11-19 17:15:56.306913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13b8a80) on tqpair=0x1377d90 00:17:03.839 [2024-11-19 17:15:56.306921] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:17:03.839 00:17:03.839 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:03.839 [2024-11-19 17:15:56.349954] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
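The second identify run starts here, driven by the -r transport-ID string in the command line above. A minimal sketch of the same connect path through SPDK's public API, assuming the SPDK environment is already initialized; defaults fill in everything the string omits, including the keep-alive timeout behind the "keep alive every 5000000 us" message earlier:

#include <stdio.h>
#include "spdk/nvme.h"

static int
connect_by_trid_string(void)
{
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr_opts opts;
        struct spdk_nvme_ctrlr *ctrlr;

        /* Same key:value grammar as the -r argument above. */
        if (spdk_nvme_transport_id_parse(&trid,
                        "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 "
                        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return -1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* opts.keep_alive_timeout_ms is the knob behind the negotiated KATO. */

        ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
        if (ctrlr == NULL) {
                fprintf(stderr, "spdk_nvme_connect() failed\n");
                return -1;
        }

        spdk_nvme_detach(ctrlr);
        return 0;
}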
00:17:03.839 [2024-11-19 17:15:56.350027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87348 ] 00:17:04.103 [2024-11-19 17:15:56.513615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:04.103 [2024-11-19 17:15:56.513682] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.103 [2024-11-19 17:15:56.513688] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.103 [2024-11-19 17:15:56.513699] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.103 [2024-11-19 17:15:56.513710] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.103 [2024-11-19 17:15:56.513974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:04.103 [2024-11-19 17:15:56.514049] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4d6d90 0 00:17:04.103 [2024-11-19 17:15:56.520794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.103 [2024-11-19 17:15:56.520811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.103 [2024-11-19 17:15:56.520816] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.103 [2024-11-19 17:15:56.520819] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.103 [2024-11-19 17:15:56.520847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.520854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.520857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.103 [2024-11-19 17:15:56.520869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.103 [2024-11-19 17:15:56.520901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.103 [2024-11-19 17:15:56.528793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.103 [2024-11-19 17:15:56.528813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.103 [2024-11-19 17:15:56.528818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.528822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.103 [2024-11-19 17:15:56.528833] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.103 [2024-11-19 17:15:56.528840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:04.103 [2024-11-19 17:15:56.528846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:04.103 [2024-11-19 17:15:56.528861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.528865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.528869] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.103 [2024-11-19 17:15:56.528878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.103 [2024-11-19 17:15:56.528905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.103 [2024-11-19 17:15:56.528985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.103 [2024-11-19 17:15:56.528992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.103 [2024-11-19 17:15:56.528995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.103 [2024-11-19 17:15:56.528999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.529004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:04.104 [2024-11-19 17:15:56.529011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:04.104 [2024-11-19 17:15:56.529019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.529033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.529067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.529396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.529410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.529414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.529424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:04.104 [2024-11-19 17:15:56.529432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.529439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.529454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.529473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.529543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.529550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 
17:15:56.529553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.529562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.529572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.529586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.529604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.529726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.529746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.529751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.529759] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.104 [2024-11-19 17:15:56.529764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.529773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.529884] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:04.104 [2024-11-19 17:15:56.529890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.529899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.529906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.529913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.529934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.530298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.530313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.530316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 
[2024-11-19 17:15:56.530325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.104 [2024-11-19 17:15:56.530335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.530350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.530369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.530433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.530439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.530442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.530451] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.104 [2024-11-19 17:15:56.530455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.104 [2024-11-19 17:15:56.530463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:04.104 [2024-11-19 17:15:56.530478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.104 [2024-11-19 17:15:56.530488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.530500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.104 [2024-11-19 17:15:56.530519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.530846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.104 [2024-11-19 17:15:56.530861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.104 [2024-11-19 17:15:56.530864] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530868] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=4096, cccid=0 00:17:04.104 [2024-11-19 17:15:56.530873] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517600) on tqpair(0x4d6d90): expected_datao=0, payload_size=4096 00:17:04.104 [2024-11-19 17:15:56.530877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530884] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530888] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.530922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.530925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.530937] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:04.104 [2024-11-19 17:15:56.530942] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:04.104 [2024-11-19 17:15:56.530946] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:04.104 [2024-11-19 17:15:56.530950] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:04.104 [2024-11-19 17:15:56.530954] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:04.104 [2024-11-19 17:15:56.530959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:04.104 [2024-11-19 17:15:56.530972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.104 [2024-11-19 17:15:56.530980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.530987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.530994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.104 [2024-11-19 17:15:56.531017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.104 [2024-11-19 17:15:56.531382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.104 [2024-11-19 17:15:56.531395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.104 [2024-11-19 17:15:56.531399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.531403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.104 [2024-11-19 17:15:56.531410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.531414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.531417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d6d90) 00:17:04.104 [2024-11-19 17:15:56.531424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.104 [2024-11-19 17:15:56.531430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.104 [2024-11-19 17:15:56.531433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.531442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.105 [2024-11-19 17:15:56.531447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.531459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.105 [2024-11-19 17:15:56.531466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.531478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.105 [2024-11-19 17:15:56.531482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.531495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.531502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.531505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.531512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.105 [2024-11-19 17:15:56.531533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517600, cid 0, qid 0 00:17:04.105 [2024-11-19 17:15:56.531540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517780, cid 1, qid 0 00:17:04.105 [2024-11-19 17:15:56.531544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517900, cid 2, qid 0 00:17:04.105 [2024-11-19 17:15:56.531556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0 00:17:04.105 [2024-11-19 17:15:56.531560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.105 [2024-11-19 17:15:56.532049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.105 [2024-11-19 17:15:56.532064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.105 [2024-11-19 17:15:56.532068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.105 [2024-11-19 17:15:56.532077] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:04.105 [2024-11-19 17:15:56.532083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.532091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.532103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.532110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.532125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.105 [2024-11-19 17:15:56.532147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.105 [2024-11-19 17:15:56.532205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.105 [2024-11-19 17:15:56.532211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.105 [2024-11-19 17:15:56.532214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.105 [2024-11-19 17:15:56.532275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.532287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.532295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.532305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.105 [2024-11-19 17:15:56.532325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.105 [2024-11-19 17:15:56.532646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.105 [2024-11-19 17:15:56.532659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.105 [2024-11-19 17:15:56.532663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532667] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=4096, cccid=4 00:17:04.105 [2024-11-19 17:15:56.532671] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517c00) on tqpair(0x4d6d90): expected_datao=0, payload_size=4096 00:17:04.105 [2024-11-19 17:15:56.532675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532682] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.105 [2024-11-19 
17:15:56.532700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.105 [2024-11-19 17:15:56.532703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.532706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.105 [2024-11-19 17:15:56.532722] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:04.105 [2024-11-19 17:15:56.536747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.536772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.536784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.536788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.536796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.105 [2024-11-19 17:15:56.536821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.105 [2024-11-19 17:15:56.536951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.105 [2024-11-19 17:15:56.536958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.105 [2024-11-19 17:15:56.536961] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.536964] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=4096, cccid=4 00:17:04.105 [2024-11-19 17:15:56.536968] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517c00) on tqpair(0x4d6d90): expected_datao=0, payload_size=4096 00:17:04.105 [2024-11-19 17:15:56.536977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.536983] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.536987] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.105 [2024-11-19 17:15:56.537180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.105 [2024-11-19 17:15:56.537184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.105 [2024-11-19 17:15:56.537206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.105 [2024-11-19 17:15:56.537237] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.105 [2024-11-19 17:15:56.537259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.105 [2024-11-19 17:15:56.537406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.105 [2024-11-19 17:15:56.537412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.105 [2024-11-19 17:15:56.537415] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537419] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=4096, cccid=4 00:17:04.105 [2024-11-19 17:15:56.537423] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517c00) on tqpair(0x4d6d90): expected_datao=0, payload_size=4096 00:17:04.105 [2024-11-19 17:15:56.537427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537437] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.105 [2024-11-19 17:15:56.537571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.105 [2024-11-19 17:15:56.537574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.105 [2024-11-19 17:15:56.537578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.105 [2024-11-19 17:15:56.537587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.105 [2024-11-19 17:15:56.537625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:04.106 [2024-11-19 17:15:56.537630] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.106 [2024-11-19 17:15:56.537635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:04.106 [2024-11-19 17:15:56.537640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:04.106 [2024-11-19 17:15:56.537655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.537659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.537666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.537673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.537677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.537680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.537685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.106 [2024-11-19 17:15:56.537711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.106 [2024-11-19 17:15:56.537718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517d80, cid 5, qid 0 00:17:04.106 [2024-11-19 17:15:56.538073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.106 [2024-11-19 17:15:56.538088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.106 [2024-11-19 17:15:56.538091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.106 [2024-11-19 17:15:56.538102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.106 [2024-11-19 17:15:56.538107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.106 [2024-11-19 17:15:56.538110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517d80) on tqpair=0x4d6d90 00:17:04.106 [2024-11-19 17:15:56.538123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517d80, cid 5, qid 0 00:17:04.106 [2024-11-19 17:15:56.538261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.106 [2024-11-19 17:15:56.538267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.106 [2024-11-19 17:15:56.538270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517d80) on tqpair=0x4d6d90 00:17:04.106 [2024-11-19 17:15:56.538284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517d80, cid 5, qid 0 00:17:04.106 [2024-11-19 17:15:56.538400] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.106 [2024-11-19 17:15:56.538408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.106 [2024-11-19 17:15:56.538412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517d80) on tqpair=0x4d6d90 00:17:04.106 [2024-11-19 17:15:56.538425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517d80, cid 5, qid 0 00:17:04.106 [2024-11-19 17:15:56.538511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.106 [2024-11-19 17:15:56.538516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.106 [2024-11-19 17:15:56.538520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517d80) on tqpair=0x4d6d90 00:17:04.106 [2024-11-19 17:15:56.538543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.106 [2024-11-19 17:15:56.538600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4d6d90) 00:17:04.106 [2024-11-19 17:15:56.538605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.106 [2024-11-19 17:15:56.538625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517d80, cid 5, qid 0 00:17:04.106 [2024-11-19 17:15:56.538632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517c00, cid 4, qid 0 00:17:04.106 
[2024-11-19 17:15:56.538636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517f00, cid 6, qid 0
00:17:04.106 [2024-11-19 17:15:56.538640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x518080, cid 7, qid 0
00:17:04.106 [2024-11-19 17:15:56.539141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:17:04.106 =====================================================
00:17:04.106 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:04.106 =====================================================
00:17:04.106 Controller Capabilities/Features
00:17:04.106 ================================
00:17:04.106 Vendor ID: 8086
00:17:04.106 Subsystem Vendor ID: 8086
00:17:04.106 Serial Number: SPDK00000000000001
00:17:04.106 Model Number: SPDK bdev Controller
00:17:04.106 Firmware Version: 25.01
00:17:04.106 Recommended Arb Burst: 6
00:17:04.106 IEEE OUI Identifier: e4 d2 5c
00:17:04.106 Multi-path I/O
00:17:04.106 May have multiple subsystem ports: Yes
00:17:04.106 May have multiple controllers: Yes
00:17:04.106 Associated with SR-IOV VF: No
00:17:04.106 Max Data Transfer Size: 131072
00:17:04.106 Max Number of Namespaces: 32
00:17:04.106 Max Number of I/O Queues: 127
00:17:04.106 NVMe Specification Version (VS): 1.3
00:17:04.106 NVMe Specification Version (Identify): 1.3
00:17:04.106 Maximum Queue Entries: 128
00:17:04.106 Contiguous Queues Required: Yes
00:17:04.106 Arbitration Mechanisms Supported
00:17:04.106 Weighted Round Robin: Not Supported
00:17:04.106 Vendor Specific: Not Supported
00:17:04.106 Reset Timeout: 15000 ms
00:17:04.106 Doorbell Stride: 4 bytes
00:17:04.106 NVM Subsystem Reset: Not Supported
00:17:04.106 Command Sets Supported
00:17:04.106 NVM Command Set: Supported
00:17:04.106 Boot Partition: Not Supported
00:17:04.106 Memory Page Size Minimum: 4096 bytes
00:17:04.106 Memory Page Size Maximum: 4096 bytes
00:17:04.106 Persistent Memory Region: Not Supported
00:17:04.106 Optional Asynchronous Events Supported
00:17:04.106 Namespace Attribute Notices: Supported
00:17:04.106 Firmware Activation Notices: Not Supported
00:17:04.106 ANA Change Notices: Not Supported
00:17:04.106 PLE Aggregate Log Change Notices: Not Supported
00:17:04.106 LBA Status Info Alert Notices: Not Supported
00:17:04.106 EGE Aggregate Log Change Notices: Not Supported
00:17:04.106 Normal NVM Subsystem Shutdown event: Not Supported
00:17:04.106 Zone Descriptor Change Notices: Not Supported
00:17:04.106 Discovery Log Change Notices: Not Supported
00:17:04.106 Controller Attributes
00:17:04.106 128-bit Host Identifier: Supported
00:17:04.106 Non-Operational Permissive Mode: Not Supported
00:17:04.106 NVM Sets: Not Supported
00:17:04.106 Read Recovery Levels: Not Supported
00:17:04.106 Endurance Groups: Not Supported
00:17:04.106 Predictable Latency Mode: Not Supported
00:17:04.106 Traffic Based Keep ALive: Not Supported
00:17:04.106 Namespace Granularity: Not Supported
00:17:04.106 SQ Associations: Not Supported
00:17:04.106 UUID List: Not Supported
00:17:04.106 Multi-Domain Subsystem: Not Supported
00:17:04.106 Fixed Capacity Management: Not Supported
00:17:04.106 Variable Capacity Management: Not Supported
00:17:04.106 Delete Endurance Group: Not Supported
00:17:04.106 Delete NVM Set: Not Supported
00:17:04.106 Extended LBA Formats Supported: Not Supported
00:17:04.106 Flexible Data Placement Supported: Not Supported
00:17:04.106
00:17:04.106 Controller Memory Buffer Support
00:17:04.106 ================================
00:17:04.106 Supported: No
00:17:04.106
00:17:04.106 Persistent Memory Region Support
00:17:04.106 ================================
00:17:04.107 Supported: No
00:17:04.107
00:17:04.107 Admin Command Set Attributes
00:17:04.107 ============================
00:17:04.107 Security Send/Receive: Not Supported
00:17:04.107 Format NVM: Not Supported
00:17:04.107 Firmware Activate/Download: Not Supported
00:17:04.107 Namespace Management: Not Supported
00:17:04.107 Device Self-Test: Not Supported
00:17:04.107 Directives: Not Supported
00:17:04.107 NVMe-MI: Not Supported
00:17:04.107 Virtualization Management: Not Supported
00:17:04.107 Doorbell Buffer Config: Not Supported
00:17:04.107 Get LBA Status Capability: Not Supported
00:17:04.107 Command & Feature Lockdown Capability: Not Supported
00:17:04.107 Abort Command Limit: 4
00:17:04.107 Async Event Request Limit: 4
00:17:04.107 Number of Firmware Slots: N/A
00:17:04.107 Firmware Slot 1 Read-Only: N/A
00:17:04.107 Firmware Activation Without Reset: N/A
00:17:04.107 Multiple Update Detection Support: N/A
00:17:04.107 Firmware Update Granularity: No Information Provided
00:17:04.107 Per-Namespace SMART Log: No
00:17:04.107 Asymmetric Namespace Access Log Page: Not Supported
00:17:04.107 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:17:04.107 Command Effects Log Page: Supported
00:17:04.107 Get Log Page Extended Data: Supported
00:17:04.107 Telemetry Log Pages: Not Supported
00:17:04.107 Persistent Event Log Pages: Not Supported
00:17:04.107 Supported Log Pages Log Page: May Support
00:17:04.107 Commands Supported & Effects Log Page: Not Supported
00:17:04.107 Feature Identifiers & Effects Log Page:May Support
00:17:04.107 NVMe-MI Commands & Effects Log Page: May Support
00:17:04.107 Data Area 4 for Telemetry Log: Not Supported
00:17:04.107 Error Log Page Entries Supported: 128
00:17:04.107 Keep Alive: Supported
00:17:04.107 Keep Alive Granularity: 10000 ms
00:17:04.107
00:17:04.107 NVM Command Set Attributes
00:17:04.107 ==========================
00:17:04.107 Submission Queue Entry Size
00:17:04.107 Max: 64
00:17:04.107 Min: 64
00:17:04.107 Completion Queue Entry Size
00:17:04.107 Max: 16
00:17:04.107 Min: 16
00:17:04.107 Number of Namespaces: 32
00:17:04.107 Compare Command: Supported
00:17:04.107 Write Uncorrectable Command: Not Supported
00:17:04.107 Dataset Management Command: Supported
00:17:04.107 Write Zeroes Command: Supported
00:17:04.107 Set Features Save Field: Not Supported
00:17:04.107 Reservations: Supported
00:17:04.107 Timestamp: Not Supported
00:17:04.107 Copy: Supported
00:17:04.107 Volatile Write Cache: Present
00:17:04.107 Atomic Write Unit (Normal): 1
00:17:04.107 Atomic Write Unit (PFail): 1
00:17:04.107 Atomic Compare & Write Unit: 1
00:17:04.107 Fused Compare & Write: Supported
00:17:04.107 Scatter-Gather List
00:17:04.107 SGL Command Set: Supported
00:17:04.107 SGL Keyed: Supported
00:17:04.107 SGL Bit Bucket Descriptor: Not Supported
00:17:04.107 SGL Metadata Pointer: Not Supported
00:17:04.107 Oversized SGL: Not Supported
00:17:04.107 SGL Metadata Address: Not Supported
00:17:04.107 SGL Offset: Supported
00:17:04.107 Transport SGL Data Block: Not Supported
00:17:04.107 Replay Protected Memory Block: Not Supported
00:17:04.107
00:17:04.107 Firmware Slot Information
00:17:04.107 =========================
00:17:04.107 Active slot: 1
00:17:04.107 Slot 1 Firmware Revision: 25.01
00:17:04.107
00:17:04.107
00:17:04.107 Commands Supported and Effects
00:17:04.107 ==============================
00:17:04.107 Admin Commands
00:17:04.107 --------------
00:17:04.107 Get Log Page (02h): Supported
00:17:04.107 Identify (06h): Supported
00:17:04.107 Abort (08h): Supported
00:17:04.107 Set Features (09h): Supported
00:17:04.107 Get Features (0Ah): Supported
00:17:04.107 Asynchronous Event Request (0Ch): Supported
00:17:04.107 Keep Alive (18h): Supported
00:17:04.107 I/O Commands
00:17:04.107 ------------
00:17:04.107 Flush (00h): Supported LBA-Change
00:17:04.107 Write (01h): Supported LBA-Change
00:17:04.107 Read (02h): Supported
00:17:04.107 Compare (05h): Supported
00:17:04.107 Write Zeroes (08h): Supported LBA-Change
00:17:04.107 Dataset Management (09h): Supported LBA-Change
00:17:04.107 Copy (19h): Supported LBA-Change
00:17:04.107
00:17:04.107 Error Log
00:17:04.107 =========
00:17:04.107
00:17:04.107 Arbitration
00:17:04.107 ===========
00:17:04.107 Arbitration Burst: 1
00:17:04.107
00:17:04.107 Power Management
00:17:04.107 ================
00:17:04.107 Number of Power States: 1
00:17:04.107 Current Power State: Power State #0
00:17:04.107 Power State #0:
00:17:04.107 Max Power: 0.00 W
00:17:04.107 Non-Operational State: Operational
00:17:04.107 Entry Latency: Not Reported
00:17:04.107 Exit Latency: Not Reported
00:17:04.107 Relative Read Throughput: 0
00:17:04.107 Relative Read Latency: 0
00:17:04.107 Relative Write Throughput: 0
00:17:04.107 Relative Write Latency: 0
00:17:04.107 Idle Power: Not Reported
00:17:04.107 Active Power: Not Reported
00:17:04.107 Non-Operational Permissive Mode: Not Supported
00:17:04.107
00:17:04.107 Health Information
00:17:04.107 ==================
00:17:04.107 Critical Warnings:
00:17:04.107 Available Spare Space: OK
00:17:04.107 Temperature: OK
00:17:04.107 Device Reliability: OK
00:17:04.107 Read Only: No
00:17:04.107 Volatile Memory Backup: OK
00:17:04.107 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:04.107 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:04.107 Available Spare: 0%
00:17:04.107 Available Spare Threshold: 0%
00:17:04.107 Life Percentage Used:[2024-11-19 17:15:56.542769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:17:04.107 [2024-11-19 17:15:56.542794] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:04.107 [2024-11-19 17:15:56.542799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=8192, cccid=5
00:17:04.107 [2024-11-19 17:15:56.542804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517d80) on tqpair(0x4d6d90): expected_datao=0, payload_size=8192
00:17:04.107 [2024-11-19 17:15:56.542809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:04.107 [2024-11-19 17:15:56.542832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:04.107 [2024-11-19 17:15:56.542838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:04.107 [2024-11-19 17:15:56.542843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:17:04.107 [2024-11-19 17:15:56.542848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:17:04.107 [2024-11-19 17:15:56.542851] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:04.107 [2024-11-19 17:15:56.542855] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=512, cccid=4
00:17:04.107 [2024-11-19 17:15:56.542859] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517c00) on tqpair(0x4d6d90): expected_datao=0, payload_size=512
00:17:04.107 [2024-11-19 17:15:56.542863]
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542872] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.107 [2024-11-19 17:15:56.542882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.107 [2024-11-19 17:15:56.542885] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=512, cccid=6 00:17:04.107 [2024-11-19 17:15:56.542892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517f00) on tqpair(0x4d6d90): expected_datao=0, payload_size=512 00:17:04.107 [2024-11-19 17:15:56.542896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542905] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.107 [2024-11-19 17:15:56.542915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.107 [2024-11-19 17:15:56.542918] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d6d90): datao=0, datal=4096, cccid=7 00:17:04.107 [2024-11-19 17:15:56.542925] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x518080) on tqpair(0x4d6d90): expected_datao=0, payload_size=4096 00:17:04.107 [2024-11-19 17:15:56.542929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542935] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542938] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.107 [2024-11-19 17:15:56.542948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.107 [2024-11-19 17:15:56.542951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517d80) on tqpair=0x4d6d90 00:17:04.107 [2024-11-19 17:15:56.542973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.107 [2024-11-19 17:15:56.542980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.107 [2024-11-19 17:15:56.542983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.107 [2024-11-19 17:15:56.542986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517c00) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517f00) on 
tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x518080) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4d6d90) 00:17:04.108 [2024-11-19 17:15:56.543140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.108 [2024-11-19 17:15:56.543173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x518080, cid 7, qid 0 00:17:04.108 [2024-11-19 17:15:56.543250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x518080) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543300] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:04.108 [2024-11-19 17:15:56.543310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517600) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.108 [2024-11-19 17:15:56.543322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517780) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.108 [2024-11-19 17:15:56.543331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517900) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.108 [2024-11-19 17:15:56.543340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.108 [2024-11-19 17:15:56.543352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90) 00:17:04.108 [2024-11-19 17:15:56.543366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.108 [2024-11-19 17:15:56.543390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0 00:17:04.108 
[2024-11-19 17:15:56.543439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90) 00:17:04.108 [2024-11-19 17:15:56.543475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.108 [2024-11-19 17:15:56.543497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0 00:17:04.108 [2024-11-19 17:15:56.543602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543620] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:04.108 [2024-11-19 17:15:56.543624] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:04.108 [2024-11-19 17:15:56.543633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90) 00:17:04.108 [2024-11-19 17:15:56.543647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.108 [2024-11-19 17:15:56.543665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0 00:17:04.108 [2024-11-19 17:15:56.543718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.108 [2024-11-19 17:15:56.543723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.108 [2024-11-19 17:15:56.543727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90 00:17:04.108 [2024-11-19 17:15:56.543755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.108 [2024-11-19 17:15:56.543765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90) 00:17:04.108 [2024-11-19 17:15:56.543772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.108 [2024-11-19 17:15:56.543792] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0
00:17:04.108 [2024-11-19 17:15:56.543855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:04.108 [2024-11-19 17:15:56.543861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:04.108 [2024-11-19 17:15:56.543864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:04.108 [2024-11-19 17:15:56.543867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90
00:17:04.108 [2024-11-19 17:15:56.543877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:04.108 [2024-11-19 17:15:56.543881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:04.108 [2024-11-19 17:15:56.543884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90)
00:17:04.108 [2024-11-19 17:15:56.543891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.108 [2024-11-19 17:15:56.543909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0
[... the same FABRIC PROPERTY GET polling cycle repeats with only the timestamps advancing (17:15:56.543971 through 17:15:56.547548) while nvme_ctrlr_shutdown_poll_async waits for controller shutdown; duplicate cycles omitted ...]
00:17:04.111 [2024-11-19 17:15:56.547625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:04.111 [2024-11-19 17:15:56.547631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:04.111 [2024-11-19 17:15:56.547634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.547638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90
00:17:04.111 [2024-11-19 17:15:56.547647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.547651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.547655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90)
00:17:04.111 [2024-11-19 17:15:56.547661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.111 [2024-11-19 17:15:56.547678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0
00:17:04.111 [2024-11-19 17:15:56.551792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:04.111 [2024-11-19 17:15:56.551810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:04.111 [2024-11-19 17:15:56.551815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.551819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90
00:17:04.111 [2024-11-19 17:15:56.551832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.551837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.551841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d6d90)
00:17:04.111 [2024-11-19 17:15:56.551849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.111 [2024-11-19 17:15:56.551876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517a80, cid 3, qid 0
00:17:04.111 [2024-11-19 17:15:56.552003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:04.111 [2024-11-19 17:15:56.552010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:04.111 [2024-11-19 17:15:56.552013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:04.111 [2024-11-19 17:15:56.552017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x517a80) on tqpair=0x4d6d90
00:17:04.111 [2024-11-19 17:15:56.552025] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds
00:17:04.111 0%
00:17:04.111 Data Units Read: 0
00:17:04.111 Data Units Written: 0
00:17:04.111 Host Read Commands: 0
00:17:04.111 Host Write Commands: 0
00:17:04.111 Controller Busy Time: 0 minutes
00:17:04.111 Power Cycles: 0
00:17:04.111 Power On Hours: 0 hours
00:17:04.111 Unsafe Shutdowns: 0
00:17:04.111 Unrecoverable Media Errors: 0
00:17:04.111 Lifetime Error Log Entries: 0
00:17:04.111 Warning Temperature Time: 0 minutes
00:17:04.111 Critical Temperature Time: 0 minutes
00:17:04.111 
00:17:04.111 Number of Queues
00:17:04.111 ================
00:17:04.111 Number of I/O Submission Queues: 127
00:17:04.111 Number of I/O Completion Queues: 127
00:17:04.111 
00:17:04.111 Active Namespaces
00:17:04.111 =================
00:17:04.111 Namespace ID:1
00:17:04.111 Error Recovery Timeout: Unlimited
00:17:04.111 Command Set Identifier: NVM (00h)
00:17:04.111 Deallocate: Supported
00:17:04.111 Deallocated/Unwritten Error: Not Supported
00:17:04.111 Deallocated Read Value: Unknown
00:17:04.111 Deallocate in Write Zeroes: Not Supported
00:17:04.111 Deallocated Guard Field: 0xFFFF
00:17:04.111 Flush: Supported
00:17:04.111 Reservation: Supported
00:17:04.111 Namespace Sharing Capabilities: Multiple Controllers
00:17:04.111 Size (in LBAs): 131072 (0GiB)
00:17:04.111 Capacity (in LBAs): 131072 (0GiB)
00:17:04.111 Utilization (in LBAs): 131072 (0GiB)
00:17:04.111 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:04.111 EUI64: ABCDEF0123456789
00:17:04.111 UUID: ad1ad11e-ccbb-414a-ab78-4d766d407952
00:17:04.111 Thin Provisioning: Not Supported
00:17:04.111 Per-NS Atomic Units: Yes
00:17:04.111 Atomic Boundary Size (Normal): 0
00:17:04.111 Atomic Boundary Size (PFail): 0
00:17:04.111 Atomic Boundary Offset: 0
00:17:04.111 Maximum Single Source Range Length: 65535
00:17:04.111 Maximum Copy Length: 65535
00:17:04.111 Maximum Source Range Count: 1
00:17:04.111 NGUID/EUI64 Never Reused: No
00:17:04.111 Namespace Write Protected: No
00:17:04.111 Number of LBA Formats: 1
00:17:04.111 Current LBA Format: LBA Format #00
00:17:04.111 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:04.111 
00:17:04.111 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:17:04.111 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:04.111 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.111 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:04.371 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87302 ']'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87302
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87302 ']'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87302
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87302
00:17:04.371 killing process with pid 87302
17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87302'
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87302
00:17:04.371 17:15:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87302
00:17:04.630 17:15:57
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:04.630 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:04.889 00:17:04.889 real 0m2.542s 00:17:04.889 user 0m5.432s 00:17:04.889 sys 0m0.862s 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.889 ************************************ 00:17:04.889 END TEST nvmf_identify 00:17:04.889 ************************************ 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.889 ************************************ 00:17:04.889 START TEST nvmf_perf 00:17:04.889 ************************************ 00:17:04.889 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:05.149 * Looking for test storage... 00:17:05.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.149 --rc genhtml_branch_coverage=1 00:17:05.149 --rc genhtml_function_coverage=1 00:17:05.149 --rc genhtml_legend=1 00:17:05.149 --rc geninfo_all_blocks=1 00:17:05.149 --rc geninfo_unexecuted_blocks=1 00:17:05.149 00:17:05.149 ' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.149 --rc genhtml_branch_coverage=1 00:17:05.149 --rc genhtml_function_coverage=1 00:17:05.149 --rc genhtml_legend=1 00:17:05.149 --rc geninfo_all_blocks=1 00:17:05.149 --rc geninfo_unexecuted_blocks=1 00:17:05.149 00:17:05.149 ' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.149 --rc genhtml_branch_coverage=1 00:17:05.149 --rc genhtml_function_coverage=1 00:17:05.149 --rc genhtml_legend=1 00:17:05.149 --rc geninfo_all_blocks=1 00:17:05.149 --rc geninfo_unexecuted_blocks=1 00:17:05.149 00:17:05.149 ' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.149 --rc genhtml_branch_coverage=1 00:17:05.149 --rc genhtml_function_coverage=1 00:17:05.149 --rc genhtml_legend=1 00:17:05.149 --rc geninfo_all_blocks=1 00:17:05.149 --rc geninfo_unexecuted_blocks=1 00:17:05.149 00:17:05.149 ' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.149 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.150 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null'
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:17:05.150 Cannot find device "nvmf_init_br"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:17:05.150 Cannot find device "nvmf_init_br2"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:17:05.150 Cannot find device "nvmf_tgt_br"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:17:05.150 Cannot find device "nvmf_tgt_br2"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:17:05.150 Cannot find device "nvmf_init_br"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:17:05.150 Cannot find device "nvmf_init_br2"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:17:05.150 Cannot find device "nvmf_tgt_br"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:17:05.150 Cannot find device "nvmf_tgt_br2"
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true
00:17:05.150 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:17:05.409 Cannot find device "nvmf_br"
00:17:05.409 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true
00:17:05.409 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:17:05.409 Cannot find device "nvmf_init_if"
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:17:05.410 Cannot find device "nvmf_init_if2"
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:05.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:05.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:05.410 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:05.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:05.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms
00:17:05.669
00:17:05.669 --- 10.0.0.3 ping statistics ---
00:17:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.669 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:05.669 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:05.669 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms
00:17:05.669
00:17:05.669 --- 10.0.0.4 ping statistics ---
00:17:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.669 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:05.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:05.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:17:05.669
00:17:05.669 --- 10.0.0.1 ping statistics ---
00:17:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.669 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:05.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:05.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:17:05.669
00:17:05.669 --- 10.0.0.2 ping statistics ---
00:17:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.669 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:17:05.669 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87568
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87568
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87568 ']'
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:05.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
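The nvmf_veth_init sequence traced above amounts to the following minimal bash sketch, using the harness's own interface and namespace names; only the first initiator/target pair is shown, and the second pair (nvmf_init_if2/nvmf_tgt_if2, addresses 10.0.0.2 and 10.0.0.4) is built the same way:

  ip netns add nvmf_tgt_ns_spdk                                # the target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                              # the bridge joins the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.3                                           # reachability check before the target starts

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are the expected failures of the best-effort cleanup that runs before this setup on a fresh host.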
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:05.670 17:15:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:17:05.670 [2024-11-19 17:15:58.054946] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:17:05.670 [2024-11-19 17:15:58.055065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:05.929 [2024-11-19 17:15:58.206415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:05.929 [2024-11-19 17:15:58.257965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:05.929 [2024-11-19 17:15:58.258035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:05.929 [2024-11-19 17:15:58.258045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:05.929 [2024-11-19 17:15:58.258053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:05.929 [2024-11-19 17:15:58.258059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:05.929 [2024-11-19 17:15:58.259418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:05.929 [2024-11-19 17:15:58.259562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:05.929 [2024-11-19 17:15:58.259682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:05.929 [2024-11-19 17:15:58.259687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:17:05.929 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:06.188 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:17:06.188 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:17:06.447 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev
00:17:06.447 17:15:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:17:06.706 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0
00:17:06.706 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:07.286 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:17:07.286 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']'
00:17:07.286 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
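The bdev bring-up just traced talks to the running target over its default RPC socket (/var/tmp/spdk.sock, reachable across the namespace boundary); a condensed sketch with the same workspace paths follows. Exactly how perf.sh feeds gen_nvme.sh's output into load_subsystem_config is the script's own plumbing; it is shown here as a pipe purely for illustration:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config    # attach the local NVMe device as a bdev
  $rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'   # recover its PCI address: 0000:00:10.0
  $rpc bdev_malloc_create 64 512    # plus a 64 MiB RAM bdev with 512-byte blocks -> "Malloc0"

Both bdevs ("Malloc0" and "Nvme0n1") then become the namespaces of the NVMe-oF subsystem created in the next step.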
00:17:07.286 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:17:07.286 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:07.286 [2024-11-19 17:15:59.816894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:07.545 17:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:07.804 17:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:07.804 17:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:08.063 17:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:08.063 17:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:17:08.323 17:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:17:08.582 [2024-11-19 17:16:01.079350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:17:08.582 17:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:17:08.842 17:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:17:08.842 17:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:08.842 17:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:17:08.842 17:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:10.218 Initializing NVMe Controllers
00:17:10.218 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:17:10.218 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:17:10.218 Initialization complete. Launching workers.
00:17:10.218 ========================================================
00:17:10.218 Latency(us)
00:17:10.218 Device Information : IOPS MiB/s Average min max
00:17:10.218 PCIE (0000:00:10.0) NSID 1 from core 0: 25024.00 97.75 1278.59 359.49 7557.86
00:17:10.218 ========================================================
00:17:10.218 Total : 25024.00 97.75 1278.59 359.49 7557.86
00:17:10.218
00:17:10.218 17:16:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:11.595 Initializing NVMe Controllers
00:17:11.595 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:11.595 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:11.595 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:11.595 Initialization complete. Launching workers.
00:17:11.595 ========================================================
00:17:11.595 Latency(us)
00:17:11.595 Device Information : IOPS MiB/s Average min max
00:17:11.595 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3606.98 14.09 275.78 105.56 7241.82
00:17:11.595 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8111.71 5026.27 12021.35
00:17:11.595 ========================================================
00:17:11.595 Total : 3730.98 14.57 536.21 105.56 12021.35
00:17:11.595
00:17:11.595 17:16:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:12.971 Initializing NVMe Controllers
00:17:12.971 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:12.971 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:12.971 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:12.971 Initialization complete. Launching workers.
00:17:12.971 ========================================================
00:17:12.971 Latency(us)
00:17:12.971 Device Information : IOPS MiB/s Average min max
00:17:12.971 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9244.30 36.11 3461.66 609.25 10324.90
00:17:12.971 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2626.72 10.26 12295.86 6777.30 27851.47
00:17:12.971 ========================================================
00:17:12.971 Total : 11871.03 46.37 5416.42 609.25 27851.47
00:17:12.971
00:17:12.971 17:16:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:17:12.971 17:16:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:15.503 Initializing NVMe Controllers
00:17:15.503 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:15.503 Controller IO queue size 128, less than required.
00:17:15.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:15.503 Controller IO queue size 128, less than required.
00:17:15.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:15.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:15.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:15.503 Initialization complete. Launching workers.
00:17:15.503 ========================================================
00:17:15.503 Latency(us)
00:17:15.503 Device Information : IOPS MiB/s Average min max
00:17:15.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1476.90 369.22 87915.29 63641.31 150293.07
00:17:15.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.51 140.38 235784.52 94153.03 386805.60
00:17:15.504 ========================================================
00:17:15.504 Total : 2038.41 509.60 128648.13 63641.31 386805.60
00:17:15.504
00:17:15.504 17:16:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
00:17:15.504 Initializing NVMe Controllers
00:17:15.504 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:15.504 Controller IO queue size 128, less than required.
00:17:15.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:15.504 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:17:15.504 Controller IO queue size 128, less than required.
00:17:15.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:15.504 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:17:15.504 WARNING: Some requested NVMe devices were skipped
00:17:15.504 No valid NVMe controllers or AIO or URING devices found
00:17:15.763 17:16:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
00:17:18.295 Initializing NVMe Controllers
00:17:18.295 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:18.295 Controller IO queue size 128, less than required.
00:17:18.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:18.295 Controller IO queue size 128, less than required.
00:17:18.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:18.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:18.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:18.295 Initialization complete. Launching workers.
00:17:18.295
00:17:18.295 ====================
00:17:18.295 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:18.295 TCP transport:
00:17:18.295 polls: 7382
00:17:18.295 idle_polls: 5026
00:17:18.295 sock_completions: 2356
00:17:18.295 nvme_completions: 4603
00:17:18.295 submitted_requests: 6848
00:17:18.295 queued_requests: 1
00:17:18.295
00:17:18.295 ====================
00:17:18.295 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:18.295 TCP transport:
00:17:18.295 polls: 10598
00:17:18.295 idle_polls: 8157
00:17:18.295 sock_completions: 2441
00:17:18.295 nvme_completions: 5043
00:17:18.295 submitted_requests: 7598
00:17:18.295 queued_requests: 1
00:17:18.295 ========================================================
00:17:18.296 Latency(us)
00:17:18.296 Device Information : IOPS MiB/s Average min max
00:17:18.296 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1147.38 286.84 114240.41 79818.16 181799.47
00:17:18.296 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1257.08 314.27 102527.67 48602.74 162301.98
00:17:18.296 ========================================================
00:17:18.296 Total : 2404.46 601.11 108116.85 48602.74 181799.47
00:17:18.296
00:17:18.296 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:18.296 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:18.559 17:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:18.559 rmmod nvme_tcp
00:17:18.559 rmmod nvme_fabrics
00:17:18.559 rmmod nvme_keyring
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87568 ']'
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87568
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87568 ']'
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87568
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87568
00:17:18.559 killing process with pid 87568
17:16:11
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87568' 00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87568 00:17:18.559 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87568 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:19.504 00:17:19.504 real 0m14.643s 00:17:19.504 user 0m52.979s 00:17:19.504 sys 0m3.956s 00:17:19.504 17:16:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.504 17:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:19.504 ************************************ 00:17:19.504 END TEST nvmf_perf 00:17:19.504 ************************************ 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.765 ************************************ 00:17:19.765 START TEST nvmf_fio_host 00:17:19.765 ************************************ 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:19.765 * Looking for test storage... 00:17:19.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.765 --rc genhtml_branch_coverage=1 00:17:19.765 --rc genhtml_function_coverage=1 00:17:19.765 --rc genhtml_legend=1 00:17:19.765 --rc geninfo_all_blocks=1 00:17:19.765 --rc geninfo_unexecuted_blocks=1 00:17:19.765 00:17:19.765 ' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.765 --rc genhtml_branch_coverage=1 00:17:19.765 --rc genhtml_function_coverage=1 00:17:19.765 --rc genhtml_legend=1 00:17:19.765 --rc geninfo_all_blocks=1 00:17:19.765 --rc geninfo_unexecuted_blocks=1 00:17:19.765 00:17:19.765 ' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.765 --rc genhtml_branch_coverage=1 00:17:19.765 --rc genhtml_function_coverage=1 00:17:19.765 --rc genhtml_legend=1 00:17:19.765 --rc geninfo_all_blocks=1 00:17:19.765 --rc geninfo_unexecuted_blocks=1 00:17:19.765 00:17:19.765 ' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.765 --rc genhtml_branch_coverage=1 00:17:19.765 --rc genhtml_function_coverage=1 00:17:19.765 --rc genhtml_legend=1 00:17:19.765 --rc geninfo_all_blocks=1 00:17:19.765 --rc geninfo_unexecuted_blocks=1 00:17:19.765 00:17:19.765 ' 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.765 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.766 17:16:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.766 17:16:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
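The lcov gate traced a little above (scripts/common.sh's lt, which calls cmp_versions) compares dotted version strings numerically field by field rather than lexically, which is why 1.15 sorts before 2. A minimal bash re-implementation of the same idea, not the script's exact code, with numeric fields assumed (the real script also validates each field with a regex):

  cmp_lt() {                    # succeeds when $1 sorts before $2 as a dotted version
      local IFS=.- a b v        # split fields on '.' and '-', as the traced code does
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # the first differing field decides
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                  # equal versions are not less-than
  }
  cmp_lt 1.15 2 && echo older   # 1 < 2 in the first field, matching the trace above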
00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.766 Cannot find device "nvmf_init_br" 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.766 Cannot find device "nvmf_init_br2" 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.766 Cannot find device "nvmf_tgt_br" 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:19.766 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:20.026 Cannot find device "nvmf_tgt_br2" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:20.026 Cannot find device "nvmf_init_br" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:20.026 Cannot find device "nvmf_init_br2" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:20.026 Cannot find device "nvmf_tgt_br" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:20.026 Cannot find device "nvmf_tgt_br2" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:20.026 Cannot find device "nvmf_br" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:20.026 Cannot find device "nvmf_init_if" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:20.026 Cannot find device "nvmf_init_if2" 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.026 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:20.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:20.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:17:20.286 00:17:20.286 --- 10.0.0.3 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:20.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:20.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:17:20.286 00:17:20.286 --- 10.0.0.4 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:20.286 00:17:20.286 --- 10.0.0.1 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:20.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:17:20.286 00:17:20.286 --- 10.0.0.2 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88097 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88097 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 88097 ']' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.286 17:16:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 [2024-11-19 17:16:12.721537] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:20.286 [2024-11-19 17:16:12.721645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.544 [2024-11-19 17:16:12.876291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.544 [2024-11-19 17:16:12.934456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.544 [2024-11-19 17:16:12.934542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.544 [2024-11-19 17:16:12.934567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.544 [2024-11-19 17:16:12.934578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.544 [2024-11-19 17:16:12.934587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
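The trace above is the harness building its virtual test network and then starting the target inside it: nvmf_veth_init creates veth pairs whose initiator ends keep 10.0.0.1/10.0.0.2 in the root namespace while the target ends move into the nvmf_tgt_ns_spdk namespace as 10.0.0.3/10.0.0.4, all peer ends are enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, and reachability is ping-verified in both directions before fio.sh launches nvmf_tgt in the namespace. A minimal sketch of the same bring-up, condensed from the ip/iptables calls traced above and reduced to one initiator/target pair (the harness creates two of each; SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # one bridge joins all peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                             # root ns -> target ns check
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &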
00:17:20.544 [2024-11-19 17:16:12.936108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.544 [2024-11-19 17:16:12.936248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.544 [2024-11-19 17:16:12.936309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.544 [2024-11-19 17:16:12.936315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.803 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.803 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:20.803 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.061 [2024-11-19 17:16:13.394633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.061 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:21.061 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.061 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.061 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:21.319 Malloc1 00:17:21.319 17:16:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.577 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:21.836 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:22.094 [2024-11-19 17:16:14.479797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.094 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:22.353 17:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:22.612 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:22.612 fio-3.35 00:17:22.612 Starting 1 thread 00:17:25.143 00:17:25.143 test: (groupid=0, jobs=1): err= 0: pid=88209: Tue Nov 19 17:16:17 2024 00:17:25.143 read: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(78.9MiB/2006msec) 00:17:25.143 slat (nsec): min=1702, max=369082, avg=2484.60, stdev=3671.08 00:17:25.143 clat (usec): min=3576, max=12384, avg=6694.80, stdev=689.64 00:17:25.143 lat (usec): min=3644, max=12388, avg=6697.29, stdev=689.81 00:17:25.143 clat percentiles (usec): 00:17:25.143 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:17:25.143 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6718], 00:17:25.143 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7701], 00:17:25.143 | 99.00th=[ 9896], 99.50th=[10814], 99.90th=[11600], 99.95th=[11863], 00:17:25.143 | 99.99th=[12256] 00:17:25.143 bw ( KiB/s): min=38760, max=40960, per=99.92%, avg=40260.00, stdev=1012.74, samples=4 00:17:25.143 iops : min= 9690, max=10240, avg=10065.00, stdev=253.19, samples=4 00:17:25.143 write: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(79.0MiB/2006msec); 0 zone resets 00:17:25.143 slat (nsec): min=1743, max=280339, avg=2539.86, stdev=2688.69 00:17:25.143 clat (usec): min=2548, max=11029, avg=5964.89, stdev=462.44 00:17:25.143 lat (usec): min=2562, max=11031, avg=5967.43, stdev=462.52 00:17:25.143 clat percentiles (usec): 00:17:25.143 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 
00:17:25.143 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:17:25.143 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:17:25.143 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 8455], 99.95th=[ 9241], 00:17:25.143 | 99.99th=[10421] 00:17:25.143 bw ( KiB/s): min=39200, max=40960, per=100.00%, avg=40328.00, stdev=780.51, samples=4 00:17:25.143 iops : min= 9800, max=10240, avg=10082.00, stdev=195.13, samples=4 00:17:25.143 lat (msec) : 4=0.10%, 10=99.41%, 20=0.49% 00:17:25.143 cpu : usr=65.94%, sys=24.64%, ctx=12, majf=0, minf=7 00:17:25.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:25.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.143 issued rwts: total=20207,20214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.143 00:17:25.143 Run status group 0 (all jobs): 00:17:25.143 READ: bw=39.3MiB/s (41.3MB/s), 39.3MiB/s-39.3MiB/s (41.3MB/s-41.3MB/s), io=78.9MiB (82.8MB), run=2006-2006msec 00:17:25.143 WRITE: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=79.0MiB (82.8MB), run=2006-2006msec 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.143 17:16:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:25.143 17:16:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:25.143 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:25.143 fio-3.35 00:17:25.143 Starting 1 thread 00:17:27.674 00:17:27.674 test: (groupid=0, jobs=1): err= 0: pid=88258: Tue Nov 19 17:16:19 2024 00:17:27.674 read: IOPS=7920, BW=124MiB/s (130MB/s)(248MiB/2006msec) 00:17:27.674 slat (usec): min=2, max=118, avg= 3.85, stdev= 2.43 00:17:27.674 clat (usec): min=2688, max=18152, avg=9574.53, stdev=2398.77 00:17:27.674 lat (usec): min=2692, max=18155, avg=9578.39, stdev=2398.82 00:17:27.674 clat percentiles (usec): 00:17:27.674 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7373], 00:17:27.674 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10421], 00:17:27.674 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12387], 95.00th=[13304], 00:17:27.674 | 99.00th=[15926], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:17:27.674 | 99.99th=[18220] 00:17:27.674 bw ( KiB/s): min=61344, max=67232, per=51.28%, avg=64984.00, stdev=2756.76, samples=4 00:17:27.674 iops : min= 3834, max= 4202, avg=4061.50, stdev=172.30, samples=4 00:17:27.674 write: IOPS=4752, BW=74.3MiB/s (77.9MB/s)(133MiB/1797msec); 0 zone resets 00:17:27.674 slat (usec): min=30, max=334, avg=38.89, stdev= 9.10 00:17:27.674 clat (usec): min=3658, max=21123, avg=11566.03, stdev=2147.90 00:17:27.674 lat (usec): min=3694, max=21157, avg=11604.92, stdev=2148.55 00:17:27.674 clat percentiles (usec): 00:17:27.674 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:17:27.674 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:17:27.674 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14615], 95.00th=[15533], 00:17:27.674 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19792], 99.95th=[20317], 00:17:27.674 | 99.99th=[21103] 00:17:27.674 bw ( KiB/s): min=64992, max=70336, per=89.27%, avg=67880.00, stdev=2682.96, samples=4 00:17:27.674 iops : min= 4062, max= 4396, avg=4242.50, stdev=167.69, samples=4 00:17:27.674 lat (msec) : 4=0.20%, 10=44.45%, 20=55.31%, 50=0.03% 00:17:27.674 cpu : usr=72.52%, sys=18.15%, ctx=6, majf=0, minf=24 00:17:27.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:27.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.674 issued rwts: total=15888,8540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.674 00:17:27.674 Run status group 0 (all jobs): 00:17:27.674 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=248MiB (260MB), run=2006-2006msec 
00:17:27.674 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=133MiB (140MB), run=1797-1797msec 00:17:27.674 17:16:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.674 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.933 rmmod nvme_tcp 00:17:27.933 rmmod nvme_fabrics 00:17:27.933 rmmod nvme_keyring 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88097 ']' 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88097 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88097 ']' 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88097 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88097 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.933 killing process with pid 88097 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88097' 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88097 00:17:27.933 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88097 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:17:28.191 17:16:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.191 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:28.450 00:17:28.450 real 0m8.802s 00:17:28.450 user 0m34.701s 00:17:28.450 sys 0m2.496s 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.450 ************************************ 00:17:28.450 END TEST nvmf_fio_host 00:17:28.450 ************************************ 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.450 17:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.450 ************************************ 00:17:28.450 START TEST nvmf_failover 00:17:28.450 ************************************ 00:17:28.450 17:16:20 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:28.450 * Looking for test storage... 00:17:28.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.710 17:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.710 17:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.710 17:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.710 --rc genhtml_branch_coverage=1 00:17:28.710 --rc genhtml_function_coverage=1 00:17:28.710 --rc genhtml_legend=1 00:17:28.710 --rc geninfo_all_blocks=1 00:17:28.710 --rc geninfo_unexecuted_blocks=1 00:17:28.710 00:17:28.710 ' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.710 --rc genhtml_branch_coverage=1 00:17:28.710 --rc genhtml_function_coverage=1 00:17:28.710 --rc genhtml_legend=1 00:17:28.710 --rc geninfo_all_blocks=1 00:17:28.710 --rc geninfo_unexecuted_blocks=1 00:17:28.710 00:17:28.710 ' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.710 --rc genhtml_branch_coverage=1 00:17:28.710 --rc genhtml_function_coverage=1 00:17:28.710 --rc genhtml_legend=1 00:17:28.710 --rc geninfo_all_blocks=1 00:17:28.710 --rc geninfo_unexecuted_blocks=1 00:17:28.710 00:17:28.710 ' 00:17:28.710 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.711 --rc genhtml_branch_coverage=1 00:17:28.711 --rc genhtml_function_coverage=1 00:17:28.711 --rc genhtml_legend=1 00:17:28.711 --rc geninfo_all_blocks=1 00:17:28.711 --rc geninfo_unexecuted_blocks=1 00:17:28.711 00:17:28.711 ' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.711 
17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
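The "[: : integer expression expected" complaint from test/nvmf/common.sh line 33 above is bash rejecting an empty string in a numeric test: inside build_nvmf_app_args the trace shows '[' '' -eq 1 ']', i.e. a flag variable that is unset in this configuration reaching -eq. The run continues because the failed test simply evaluates false. A hedged illustration of the failure mode and the usual defensive spellings (FLAG is a stand-in name, not the variable the harness actually tests):

    FLAG=""                     # unset/empty flag, as in this run
    [ "$FLAG" -eq 1 ]           # prints "[: : integer expression expected", returns 2
    [ "${FLAG:-0}" -eq 1 ]      # defaulting to 0 keeps the test numeric and quiet
    (( ${FLAG:-0} == 1 ))       # arithmetic context, same effect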
00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:28.711 Cannot find device "nvmf_init_br" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:28.711 Cannot find device "nvmf_init_br2" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:28.711 Cannot find device "nvmf_tgt_br" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.711 Cannot find device "nvmf_tgt_br2" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:28.711 Cannot find device "nvmf_init_br" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:28.711 Cannot find device "nvmf_init_br2" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:28.711 Cannot find device "nvmf_tgt_br" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:28.711 Cannot find device "nvmf_tgt_br2" 00:17:28.711 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:28.712 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:28.712 Cannot find device "nvmf_br" 00:17:28.712 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:28.712 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:28.970 Cannot find device "nvmf_init_if" 00:17:28.970 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:28.970 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:28.970 Cannot find device "nvmf_init_if2" 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.971 
17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.971 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:29.230 00:17:29.230 --- 10.0.0.3 ping statistics --- 00:17:29.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.230 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.230 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:29.230 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:17:29.230 00:17:29.230 --- 10.0.0.4 ping statistics --- 00:17:29.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.230 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:17:29.230 00:17:29.230 --- 10.0.0.1 ping statistics --- 00:17:29.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.230 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:29.230 00:17:29.230 --- 10.0.0.2 ping statistics --- 00:17:29.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.230 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88527 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88527 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88527 ']' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:29.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.230 17:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.230 [2024-11-19 17:16:21.652001] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:29.230 [2024-11-19 17:16:21.652105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.489 [2024-11-19 17:16:21.806336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.489 [2024-11-19 17:16:21.872483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.489 [2024-11-19 17:16:21.872558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.489 [2024-11-19 17:16:21.872572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.489 [2024-11-19 17:16:21.872583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.489 [2024-11-19 17:16:21.872592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
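Note the core mask change: the fio_host target above ran with -m 0xF while this failover target uses -m 0xE. The mask is a hex bitmap of CPU cores, and 0xE is binary 1110, so cores 1 through 3 are selected and core 0 is left free, which is exactly what the three reactor lines that follow report. A one-liner to decode such masks:

    mask=0xE    # 0xF (1111) selects cores 0-3; 0xE (1110) selects cores 1-3
    for core in {0..3}; do (( mask >> core & 1 )) && echo "reactor on core $core"; done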
00:17:29.489 [2024-11-19 17:16:21.873917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.489 [2024-11-19 17:16:21.874060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.489 [2024-11-19 17:16:21.874069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.424 17:16:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.682 [2024-11-19 17:16:23.016506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.682 17:16:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:30.940 Malloc0 00:17:30.940 17:16:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.214 17:16:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.479 17:16:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:31.738 [2024-11-19 17:16:24.221380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:31.738 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:31.997 [2024-11-19 17:16:24.477485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:31.997 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:32.254 [2024-11-19 17:16:24.769729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:32.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
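With the subsystem now listening on 10.0.0.3 ports 4420, 4421 and 4422, the test attaches one bdevperf controller over two of those paths with -x failover and then removes listeners one at a time while I/O runs, so the initiator must keep switching to a surviving path; the flood of tqpair recv-state errors further down is the target-side trace of a path being torn down. The sequence, condensed from the rpc.py calls traced below (paths as on this CI VM; bdevperf itself is started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    B=/var/tmp/bdevperf.sock
    CTL=(-b NVMe0 -t tcp -a 10.0.0.3 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover)
    "$RPC" -s "$B" bdev_nvme_attach_controller "${CTL[@]}" -s 4420    # primary path
    "$RPC" -s "$B" bdev_nvme_attach_controller "${CTL[@]}" -s 4421    # alternate path
    "$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # force failover to 4421
    sleep 3
    "$RPC" -s "$B" bdev_nvme_attach_controller "${CTL[@]}" -s 4422    # add the next path
    "$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # force failover to 4422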
00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88641 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88641 /var/tmp/bdevperf.sock 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88641 ']' 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.512 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.513 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.513 17:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:32.771 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.771 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:32.771 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:33.028 NVMe0n1 00:17:33.028 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:33.594 00:17:33.594 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88680 00:17:33.594 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:33.594 17:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:34.530 17:16:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:34.788 17:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:38.073 17:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:38.073 00:17:38.073 17:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:38.333 [2024-11-19 17:16:30.781200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c600 is same with the state(6) to be set 00:17:38.333 
00:17:38.333 [2024-11-19 17:16:30.781200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c600 is same with the state(6) to be set
[last message repeated for tqpair=0x178c600, timestamps 17:16:30.781279 through 17:16:30.781752, identical apart from the timestamp]
00:17:38.333 17:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:17:41.620 17:16:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:17:41.620 [2024-11-19 17:16:34.130770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:17:41.879 17:16:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:17:42.816 17:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:17:43.102 [2024-11-19 17:16:35.492270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d6930 is same with the state(6) to be set
[last message repeated for tqpair=0x18d6930, timestamps 17:16:35.492336 through 17:16:35.493348, identical apart from the timestamp]
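Everything the test does to provoke failover is listener arithmetic on the target; the host only ever adds one extra standby path. The cycle driven by failover.sh steps @43 through @57 above, distilled into a sketch (R and NQN are shorthand introduced here, sleeps as captured):

  R=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $R nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420   # drop the active path; I/O should fail over to 4421
  sleep 3
  $R -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover                   # register a third path on the host
  $R nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421   # drop the second path
  sleep 3
  $R nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420      # bring the first portal back
  sleep 1
  $R nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422   # drop the third path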
00:17:43.103 17:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88680
00:17:49.690 {
00:17:49.690   "results": [
00:17:49.690     {
00:17:49.690       "job": "NVMe0n1",
00:17:49.690       "core_mask": "0x1",
00:17:49.690       "workload": "verify",
00:17:49.690       "status": "finished",
00:17:49.690       "verify_range": {
00:17:49.690         "start": 0,
00:17:49.690         "length": 16384
00:17:49.690       },
00:17:49.690       "queue_depth": 128,
00:17:49.690       "io_size": 4096,
00:17:49.690       "runtime": 15.009584,
00:17:49.690       "iops": 8738.683230661156,
00:17:49.690       "mibps": 34.13548136977014,
00:17:49.690       "io_failed": 3637,
00:17:49.690       "io_timeout": 0,
00:17:49.690       "avg_latency_us": 14223.03137357357,
00:17:49.690       "min_latency_us": 826.6472727272727,
00:17:49.690       "max_latency_us": 52905.42545454545
00:17:49.690     }
00:17:49.690   ],
00:17:49.690   "core_count": 1
00:17:49.690 }
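A quick cross-check of the summary: throughput in MiB/s is just iops x io_size / 2^20, and the reported fields agree; the nonzero io_failed count plausibly corresponds to the I/Os aborted at each path drop. A calculator one-liner, nothing more:

  awk 'BEGIN { printf "%.5f\n", 8738.683230661156 * 4096 / (1024 * 1024) }'   # prints 34.13548, matching "mibps"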
"max_latency_us": 52905.42545454545 00:17:49.690 } 00:17:49.690 ], 00:17:49.690 "core_count": 1 00:17:49.690 } 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88641 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88641 ']' 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88641 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88641 00:17:49.690 killing process with pid 88641 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88641' 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88641 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88641 00:17:49.690 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:49.690 [2024-11-19 17:16:24.855256] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:49.690 [2024-11-19 17:16:24.855388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88641 ] 00:17:49.690 [2024-11-19 17:16:25.007177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.690 [2024-11-19 17:16:25.073697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.690 Running I/O for 15 seconds... 
00:17:49.690 [2024-11-19 17:16:24.855256] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:17:49.690 [2024-11-19 17:16:24.855388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88641 ]
00:17:49.690 [2024-11-19 17:16:25.007177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.690 [2024-11-19 17:16:25.073697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:49.690 Running I/O for 15 seconds...
00:17:49.690 8300.00 IOPS, 32.42 MiB/s [2024-11-19T17:16:42.223Z]
00:17:49.690 [2024-11-19 17:16:27.183030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:49.690 [2024-11-19 17:16:27.183098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.690 [2024-11-19 17:16:27.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:49.690 [2024-11-19 17:16:27.183940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pairs like the two above repeat for every I/O outstanding on qid:1 (WRITEs lba 78544-78992 and READs lba 77976-78320 in this excerpt), each completing ABORTED - SQ DELETION (00/08); entries differ only in opcode, cid, lba, and timestamp]
00:17:49.693 [2024-11-19
17:16:27.186247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.693 [2024-11-19 17:16:27.186801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.693 [2024-11-19 17:16:27.186815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:17:49.693 [2024-11-19 17:16:27.186993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27670 is same with the state(6) to be set
00:17:49.693 [2024-11-19 17:16:27.187008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:49.693 [2024-11-19 17:16:27.187019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:49.693 [2024-11-19 17:16:27.187029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0
00:17:49.693 [2024-11-19 17:16:27.187041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.694 [2024-11-19 17:16:27.187108] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:17:49.694 [2024-11-19 17:16:27.187166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:49.694 [2024-11-19 17:16:27.187186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.694 [2024-11-19 17:16:27.187200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:49.694 [2024-11-19 17:16:27.187212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.694 [2024-11-19 17:16:27.187225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:49.694 [2024-11-19 17:16:27.187237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.694 [2024-11-19 17:16:27.187250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:49.694 [2024-11-19 17:16:27.187262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.694 [2024-11-19 17:16:27.187274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:49.694 [2024-11-19 17:16:27.190659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:49.694 [2024-11-19 17:16:27.190695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebaf30 (9): Bad file descriptor
00:17:49.694 [2024-11-19 17:16:27.218962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:17:49.694 8557.00 IOPS, 33.43 MiB/s [2024-11-19T17:16:42.227Z] 8848.00 IOPS, 34.56 MiB/s [2024-11-19T17:16:42.227Z] 8954.75 IOPS, 34.98 MiB/s [2024-11-19T17:16:42.227Z]
00:17:49.694 [... repeated *NOTICE* pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1, lba 92352-93184) each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 ...]
00:17:49.697 [... repeated sequences elided: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, then 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: for WRITE sqid:1 cid:0 commands (lba 93192-93336), each ABORTED - SQ DELETION (00/08) ...]
00:17:49.697 [2024-11-19 17:16:30.786590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:49.697 [2024-11-19 17:16:30.786598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:49.697 [2024-11-19 17:16:30.786607] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93344 len:8 PRP1 0x0 PRP2 0x0 00:17:49.697 [2024-11-19 17:16:30.786619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.697 [2024-11-19 17:16:30.786635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.697 [2024-11-19 17:16:30.786645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.697 [2024-11-19 17:16:30.786654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93352 len:8 PRP1 0x0 PRP2 0x0 00:17:49.697 [2024-11-19 17:16:30.786665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.697 [2024-11-19 17:16:30.786677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.697 [2024-11-19 17:16:30.786685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.697 [2024-11-19 17:16:30.786694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93360 len:8 PRP1 0x0 PRP2 0x0 00:17:49.697 [2024-11-19 17:16:30.786711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.697 [2024-11-19 17:16:30.786723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.697 [2024-11-19 17:16:30.786732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.697 [2024-11-19 17:16:30.786750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93368 len:8 PRP1 0x0 PRP2 0x0 00:17:49.697 [2024-11-19 17:16:30.786763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.697 [2024-11-19 17:16:30.786829] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:17:49.697 [2024-11-19 17:16:30.786879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.698 [2024-11-19 17:16:30.786898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:30.786912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.698 [2024-11-19 17:16:30.786923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:30.786935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.698 [2024-11-19 17:16:30.786947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:30.786958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.698 [2024-11-19 17:16:30.786969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:30.786988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:49.698 [2024-11-19 17:16:30.787034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebaf30 (9): Bad file descriptor 00:17:49.698 [2024-11-19 17:16:30.790015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:49.698 [2024-11-19 17:16:30.813829] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:17:49.698 8965.20 IOPS, 35.02 MiB/s [2024-11-19T17:16:42.231Z] 9113.33 IOPS, 35.60 MiB/s [2024-11-19T17:16:42.231Z] 9072.43 IOPS, 35.44 MiB/s [2024-11-19T17:16:42.231Z] 8997.00 IOPS, 35.14 MiB/s [2024-11-19T17:16:42.231Z] 8944.00 IOPS, 34.94 MiB/s [2024-11-19T17:16:42.231Z] [2024-11-19 17:16:35.493628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.493729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.493760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.493805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.493935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.493968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.493983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.698 [2024-11-19 17:16:35.494071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 
17:16:35.494509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.494975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.494991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.495006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.495022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.698 [2024-11-19 17:16:35.495038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.698 [2024-11-19 17:16:35.495055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.495857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.495885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.495912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 
[2024-11-19 17:16:35.495926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.495939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.495953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.495966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.496022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.496054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.699 [2024-11-19 17:16:35.496085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.496118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.496149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.496180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.496212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.699 [2024-11-19 17:16:35.496236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.699 [2024-11-19 17:16:35.496252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.700 [2024-11-19 17:16:35.496283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.700 [2024-11-19 17:16:35.496314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.700 [2024-11-19 17:16:35.496357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.700 [2024-11-19 17:16:35.496400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42608 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.496976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.496989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 
17:16:35.497180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.700 [2024-11-19 17:16:35.497440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.700 [2024-11-19 17:16:35.497452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.497981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.497994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.701 [2024-11-19 17:16:35.498036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.701 [2024-11-19 17:16:35.498092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.701 [2024-11-19 17:16:35.498103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42920 len:8 PRP1 0x0 PRP2 0x0 00:17:49.701 [2024-11-19 17:16:35.498117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498196] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:17:49.701 [2024-11-19 17:16:35.498254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.701 [2024-11-19 17:16:35.498275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.701 [2024-11-19 17:16:35.498315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.701 [2024-11-19 17:16:35.498341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.701 [2024-11-19 17:16:35.498384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.701 [2024-11-19 17:16:35.498398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:17:49.701 [2024-11-19 17:16:35.502350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:49.701 [2024-11-19 17:16:35.502411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebaf30 (9): Bad file descriptor 00:17:49.701 [2024-11-19 17:16:35.527536] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
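The records above are one complete bdev_nvme failover pass: every command still queued on the I/O qpair is completed as ABORTED - SQ DELETION, the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted, nvme_ctrlr_fail marks the controller failed, and bdev_nvme_failover_trid retries the next registered path (4421 -> 4422 above, then 4422 -> 4420). The alternate paths come from the bdev_nvme_attach_controller calls issued later in this log; a minimal sketch of that registration, assuming the target already exposes nqn.2016-06.io.spdk:cnode1 on all three ports (rpc.py here stands for the scripts/rpc.py shown in the trace):

    # Primary path; -x failover makes the extra trids standby paths that
    # bdev_nvme fails over to, rather than active/active multipath.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Standby paths, tried in registration order when the active path drops.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover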
00:17:49.701 8848.80 IOPS, 34.57 MiB/s [2024-11-19T17:16:42.234Z] 8739.82 IOPS, 34.14 MiB/s [2024-11-19T17:16:42.234Z] 8710.58 IOPS, 34.03 MiB/s [2024-11-19T17:16:42.234Z] 8743.08 IOPS, 34.15 MiB/s [2024-11-19T17:16:42.234Z] 8724.86 IOPS, 34.08 MiB/s [2024-11-19T17:16:42.234Z] 8735.73 IOPS, 34.12 MiB/s 00:17:49.701 Latency(us) 00:17:49.701 [2024-11-19T17:16:42.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.701 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:49.701 Verification LBA range: start 0x0 length 0x4000 00:17:49.701 NVMe0n1 : 15.01 8738.68 34.14 242.31 0.00 14223.03 826.65 52905.43 00:17:49.701 [2024-11-19T17:16:42.234Z] =================================================================================================================== 00:17:49.701 [2024-11-19T17:16:42.234Z] Total : 8738.68 34.14 242.31 0.00 14223.03 826.65 52905.43 00:17:49.701 Received shutdown signal, test time was about 15.000000 seconds 00:17:49.701 00:17:49.701 Latency(us) 00:17:49.701 [2024-11-19T17:16:42.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.701 [2024-11-19T17:16:42.234Z] =================================================================================================================== 00:17:49.701 [2024-11-19T17:16:42.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:49.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88884 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88884 /var/tmp/bdevperf.sock 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88884 ']' 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
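[editor's note] The pass/fail gate for the phase above is deliberately simple: host/failover.sh greps the captured output for successful controller resets and requires exactly three, one per provoked failover. Condensed from the xtrace lines visible here, and sketched against the try.txt capture file the test removes during cleanup:

count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
(( count != 3 )) && exit 1   # any other count fails the test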
00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:49.701 17:16:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:49.701 [2024-11-19 17:16:42.188022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:49.960 17:16:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:50.219 [2024-11-19 17:16:42.532612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:50.219 17:16:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:50.478 NVMe0n1 00:17:50.478 17:16:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:51.046 00:17:51.046 17:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:51.305 00:17:51.305 17:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:51.305 17:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:51.564 17:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:51.823 17:16:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:55.109 17:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:55.109 17:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:55.109 17:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.109 17:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89013 00:17:55.109 17:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89013 00:17:56.485 { 00:17:56.485 "results": [ 00:17:56.485 { 00:17:56.485 "job": "NVMe0n1", 00:17:56.485 "core_mask": "0x1", 00:17:56.485 "workload": "verify", 00:17:56.485 "status": "finished", 00:17:56.485 "verify_range": { 00:17:56.485 "start": 0, 00:17:56.485 "length": 16384 00:17:56.485 }, 00:17:56.485 "queue_depth": 128, 
00:17:56.485 "io_size": 4096, 00:17:56.485 "runtime": 1.007953, 00:17:56.485 "iops": 10063.961315656583, 00:17:56.485 "mibps": 39.31234888928353, 00:17:56.485 "io_failed": 0, 00:17:56.485 "io_timeout": 0, 00:17:56.485 "avg_latency_us": 12653.266555778606, 00:17:56.485 "min_latency_us": 1765.0036363636364, 00:17:56.485 "max_latency_us": 15132.858181818181 00:17:56.485 } 00:17:56.485 ], 00:17:56.485 "core_count": 1 00:17:56.485 } 00:17:56.485 17:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:56.485 [2024-11-19 17:16:41.455300] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:56.485 [2024-11-19 17:16:41.456287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88884 ] 00:17:56.485 [2024-11-19 17:16:41.609355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.485 [2024-11-19 17:16:41.694926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.485 [2024-11-19 17:16:44.297349] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:56.485 [2024-11-19 17:16:44.297576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.485 [2024-11-19 17:16:44.297609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.485 [2024-11-19 17:16:44.297630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.485 [2024-11-19 17:16:44.297644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.485 [2024-11-19 17:16:44.297659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.485 [2024-11-19 17:16:44.297673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.485 [2024-11-19 17:16:44.297689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.485 [2024-11-19 17:16:44.297702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.485 [2024-11-19 17:16:44.297717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:56.485 [2024-11-19 17:16:44.297809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:56.485 [2024-11-19 17:16:44.297848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaf30 (9): Bad file descriptor 00:17:56.485 [2024-11-19 17:16:44.304357] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:17:56.485 Running I/O for 1 seconds... 
00:17:56.485 10016.00 IOPS, 39.12 MiB/s 00:17:56.485 Latency(us) 00:17:56.485 [2024-11-19T17:16:49.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.485 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:56.485 Verification LBA range: start 0x0 length 0x4000 00:17:56.485 NVMe0n1 : 1.01 10063.96 39.31 0.00 0.00 12653.27 1765.00 15132.86 00:17:56.485 [2024-11-19T17:16:49.018Z] =================================================================================================================== 00:17:56.485 [2024-11-19T17:16:49.018Z] Total : 10063.96 39.31 0.00 0.00 12653.27 1765.00 15132.86 00:17:56.485 17:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:56.485 17:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:56.749 17:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.012 17:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:57.012 17:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:57.269 17:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.527 17:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:00.811 17:16:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:00.811 17:16:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88884 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88884 ']' 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88884 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88884 00:18:00.811 killing process with pid 88884 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88884' 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88884 00:18:00.811 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88884 00:18:01.071 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:01.071 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.330 rmmod nvme_tcp 00:18:01.330 rmmod nvme_fabrics 00:18:01.330 rmmod nvme_keyring 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88527 ']' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88527 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88527 ']' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88527 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88527 00:18:01.330 killing process with pid 88527 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88527' 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88527 00:18:01.330 17:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88527 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.589 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.590 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:01.848 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:01.849 00:18:01.849 real 0m33.388s 00:18:01.849 user 2m8.968s 00:18:01.849 sys 0m4.971s 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.849 ************************************ 00:18:01.849 END TEST nvmf_failover 00:18:01.849 ************************************ 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.849 ************************************ 00:18:01.849 START TEST nvmf_host_discovery 00:18:01.849 ************************************ 00:18:01.849 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:02.109 * Looking for test storage... 
00:18:02.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.109 --rc genhtml_branch_coverage=1 00:18:02.109 --rc genhtml_function_coverage=1 00:18:02.109 --rc genhtml_legend=1 00:18:02.109 --rc geninfo_all_blocks=1 00:18:02.109 --rc geninfo_unexecuted_blocks=1 00:18:02.109 00:18:02.109 ' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.109 --rc genhtml_branch_coverage=1 00:18:02.109 --rc genhtml_function_coverage=1 00:18:02.109 --rc genhtml_legend=1 00:18:02.109 --rc geninfo_all_blocks=1 00:18:02.109 --rc geninfo_unexecuted_blocks=1 00:18:02.109 00:18:02.109 ' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.109 --rc genhtml_branch_coverage=1 00:18:02.109 --rc genhtml_function_coverage=1 00:18:02.109 --rc genhtml_legend=1 00:18:02.109 --rc geninfo_all_blocks=1 00:18:02.109 --rc geninfo_unexecuted_blocks=1 00:18:02.109 00:18:02.109 ' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.109 --rc genhtml_branch_coverage=1 00:18:02.109 --rc genhtml_function_coverage=1 00:18:02.109 --rc genhtml_legend=1 00:18:02.109 --rc geninfo_all_blocks=1 00:18:02.109 --rc geninfo_unexecuted_blocks=1 00:18:02.109 00:18:02.109 ' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.109 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
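[editor's note] nvmftestinit has just named the virtual topology: initiator veths 10.0.0.1/.2 stay on the host side, target veths 10.0.0.3/.4 live inside the nvmf_tgt_ns_spdk namespace, and everything hangs off the nvmf_br bridge. Condensed from the ip commands the harness runs below, the setup for one initiator/target pair reduces to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br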
00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:02.110 Cannot find device "nvmf_init_br" 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:02.110 Cannot find device "nvmf_init_br2" 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:02.110 Cannot find device "nvmf_tgt_br" 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.110 Cannot find device "nvmf_tgt_br2" 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:02.110 Cannot find device "nvmf_init_br" 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:02.110 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:02.370 Cannot find device "nvmf_init_br2" 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:02.370 Cannot find device "nvmf_tgt_br" 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:02.370 Cannot find device "nvmf_tgt_br2" 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:02.370 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:02.371 Cannot find device "nvmf_br" 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:02.371 Cannot find device "nvmf_init_if" 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:02.371 Cannot find device "nvmf_init_if2" 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:02.371 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:02.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:18:02.630 00:18:02.630 --- 10.0.0.3 ping statistics --- 00:18:02.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.630 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:02.630 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:02.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:18:02.630 00:18:02.630 --- 10.0.0.4 ping statistics --- 00:18:02.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.630 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:18:02.630 00:18:02.630 --- 10.0.0.1 ping statistics --- 00:18:02.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.630 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:02.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:02.630 00:18:02.630 --- 10.0.0.2 ping statistics --- 00:18:02.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.630 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.630 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89368 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89368 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89368 ']' 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.631 17:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.631 [2024-11-19 17:16:55.025070] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
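[editor's note] With bridge connectivity confirmed by the four pings and nvme-tcp loaded, nvmfappstart launches the target inside the namespace. Stripped of the xtrace noise it amounts to the following; the polling loop is a sketch of what waitforlisten does, not a literal transcript:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# block until the RPC socket answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done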
00:18:02.631 [2024-11-19 17:16:55.025193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.890 [2024-11-19 17:16:55.181195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.890 [2024-11-19 17:16:55.251081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.890 [2024-11-19 17:16:55.251147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.890 [2024-11-19 17:16:55.251161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.890 [2024-11-19 17:16:55.251172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.890 [2024-11-19 17:16:55.251181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.890 [2024-11-19 17:16:55.251676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.890 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.890 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:02.890 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.890 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.890 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 [2024-11-19 17:16:55.431630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 [2024-11-19 17:16:55.439812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 null0 00:18:03.149 17:16:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 null1 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89404 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89404 /tmp/host.sock 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89404 ']' 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.149 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.149 17:16:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 [2024-11-19 17:16:55.534652] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:03.149 [2024-11-19 17:16:55.534762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89404 ] 00:18:03.408 [2024-11-19 17:16:55.688099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.408 [2024-11-19 17:16:55.761774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.343 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.344 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.603 [2024-11-19 17:16:56.948214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.603 17:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:04.603 17:16:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:04.603 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.604 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.862 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.862 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:04.862 17:16:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:05.120 [2024-11-19 17:16:57.585654] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:05.120 [2024-11-19 17:16:57.585701] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:05.120 
[2024-11-19 17:16:57.585720] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:05.379 [2024-11-19 17:16:57.672744] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:05.379 [2024-11-19 17:16:57.734093] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:05.379 [2024-11-19 17:16:57.734868] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1151ba0:1 started. 00:18:05.379 [2024-11-19 17:16:57.736760] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:05.379 [2024-11-19 17:16:57.736795] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:05.379 [2024-11-19 17:16:57.744004] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1151ba0 was disconnected and freed. delete nvme_qpair. 00:18:05.946 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 [2024-11-19 17:16:58.445642] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10cc570:1 started. 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:05.947 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:05.947 [2024-11-19 17:16:58.454306] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10cc570 was disconnected and freed. delete nvme_qpair. 
00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 [2024-11-19 17:16:58.565045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:06.206 [2024-11-19 17:16:58.566251] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:06.206 [2024-11-19 17:16:58.566278] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:06.206 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:06.207 [2024-11-19 17:16:58.651971] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:06.207 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.207 [2024-11-19 17:16:58.710260] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:06.207 [2024-11-19 17:16:58.710301] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:06.207 [2024-11-19 17:16:58.710310] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:06.207 [2024-11-19 17:16:58.710315] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:06.466 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:06.466 17:16:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.402 [2024-11-19 17:16:59.857891] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:07.402 [2024-11-19 17:16:59.857918] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:07.402 [2024-11-19 17:16:59.864440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.402 [2024-11-19 17:16:59.864480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.402 [2024-11-19 17:16:59.864491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.402 [2024-11-19 17:16:59.864500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.402 [2024-11-19 17:16:59.864508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.402 [2024-11-19 17:16:59.864515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.402 [2024-11-19 17:16:59.864524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.402 [2024-11-19 17:16:59.864531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.402 [2024-11-19 17:16:59.864540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:07.402 [2024-11-19 17:16:59.874393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.402 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.402 [2024-11-19 17:16:59.884408] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.402 [2024-11-19 17:16:59.884434] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.402 [2024-11-19 17:16:59.884443] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.402 [2024-11-19 17:16:59.884449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.402 [2024-11-19 17:16:59.884472] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:07.402 [2024-11-19 17:16:59.884526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.402 [2024-11-19 17:16:59.884544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.402 [2024-11-19 17:16:59.884554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.402 [2024-11-19 17:16:59.884568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.402 [2024-11-19 17:16:59.884581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.402 [2024-11-19 17:16:59.884589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.402 [2024-11-19 17:16:59.884598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.402 [2024-11-19 17:16:59.884606] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:07.402 [2024-11-19 17:16:59.884612] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
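The *ERROR* burst that starts here is the expected fallout of the listener removal at discovery.sh@127 above: the discovery controller raises an AEN ("got aer"), the host refetches the discovery log page, and the data qpair to 10.0.0.3:4420 is torn down (the outstanding ASYNC EVENT REQUEST completions abort with SQ DELETION). bdev_nvme then loops through its disconnect/reconnect path, and each reconnect attempt fails with connect() errno = 111 — ECONNREFUSED on Linux — because nothing listens on 4420 any more. The retries end once the discovery poller sees 4420 drop out of the log page ("...4420 not found" below) and deletes the stale path, which is what the test asserts next, per the @131 trace further down:

    # Only the second listener should remain as a path of nvme0 (4421 in this run).
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'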
00:18:07.402 [2024-11-19 17:16:59.884616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.402 [2024-11-19 17:16:59.894478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.402 [2024-11-19 17:16:59.894499] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.402 [2024-11-19 17:16:59.894504] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.402 [2024-11-19 17:16:59.894509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.402 [2024-11-19 17:16:59.894527] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:07.402 [2024-11-19 17:16:59.894571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.403 [2024-11-19 17:16:59.894587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.403 [2024-11-19 17:16:59.894597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.403 [2024-11-19 17:16:59.894609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.403 [2024-11-19 17:16:59.894622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.403 [2024-11-19 17:16:59.894629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.403 [2024-11-19 17:16:59.894637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.403 [2024-11-19 17:16:59.894644] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:07.403 [2024-11-19 17:16:59.894650] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:07.403 [2024-11-19 17:16:59.894654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.403 [2024-11-19 17:16:59.904537] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.403 [2024-11-19 17:16:59.904560] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.403 [2024-11-19 17:16:59.904565] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.403 [2024-11-19 17:16:59.904570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.403 [2024-11-19 17:16:59.904589] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:18:07.403 [2024-11-19 17:16:59.904635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.403 [2024-11-19 17:16:59.904653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.403 [2024-11-19 17:16:59.904663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.403 [2024-11-19 17:16:59.904677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.403 [2024-11-19 17:16:59.904690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.403 [2024-11-19 17:16:59.904698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.403 [2024-11-19 17:16:59.904706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.403 [2024-11-19 17:16:59.904713] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:07.403 [2024-11-19 17:16:59.904718] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:07.403 [2024-11-19 17:16:59.904722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.403 [2024-11-19 17:16:59.914597] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.403 [2024-11-19 17:16:59.914618] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.403 [2024-11-19 17:16:59.914623] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.403 [2024-11-19 17:16:59.914627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.403 [2024-11-19 17:16:59.914646] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:07.403 [2024-11-19 17:16:59.914689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.403 [2024-11-19 17:16:59.914705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.403 [2024-11-19 17:16:59.914714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.403 [2024-11-19 17:16:59.914727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.403 [2024-11-19 17:16:59.914751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.403 [2024-11-19 17:16:59.914761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.403 [2024-11-19 17:16:59.914770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.403 [2024-11-19 17:16:59.914784] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:07.403 [2024-11-19 17:16:59.914789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:07.403 [2024-11-19 17:16:59.914793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.403 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:07.403 [2024-11-19 17:16:59.924654] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.403 [2024-11-19 17:16:59.924675] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.403 [2024-11-19 17:16:59.924680] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.403 [2024-11-19 17:16:59.924684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.403 [2024-11-19 17:16:59.924702] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:18:07.403 [2024-11-19 17:16:59.924760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.403 [2024-11-19 17:16:59.924778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.403 [2024-11-19 17:16:59.924788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.403 [2024-11-19 17:16:59.924801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.403 [2024-11-19 17:16:59.924814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.403 [2024-11-19 17:16:59.924822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.403 [2024-11-19 17:16:59.924830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.403 [2024-11-19 17:16:59.924837] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:07.403 [2024-11-19 17:16:59.924842] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:07.403 [2024-11-19 17:16:59.924846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.661 [2024-11-19 17:16:59.934710] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:07.661 [2024-11-19 17:16:59.934729] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:07.661 [2024-11-19 17:16:59.934734] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:07.661 [2024-11-19 17:16:59.934750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:07.661 [2024-11-19 17:16:59.934779] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:07.661 [2024-11-19 17:16:59.934822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.661 [2024-11-19 17:16:59.934839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124280 with addr=10.0.0.3, port=4420 00:18:07.661 [2024-11-19 17:16:59.934847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124280 is same with the state(6) to be set 00:18:07.661 [2024-11-19 17:16:59.934859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124280 (9): Bad file descriptor 00:18:07.662 [2024-11-19 17:16:59.934872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:07.662 [2024-11-19 17:16:59.934880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:07.662 [2024-11-19 17:16:59.934887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:07.662 [2024-11-19 17:16:59.934894] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:07.662 [2024-11-19 17:16:59.934899] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:07.662 [2024-11-19 17:16:59.934903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:07.662 [2024-11-19 17:16:59.943941] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:07.662 [2024-11-19 17:16:59.943967] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:07.662 17:16:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:07.662 17:17:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:07.662 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.919 17:17:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.851 [2024-11-19 17:17:01.286689] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:08.851 [2024-11-19 17:17:01.286724] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:08.851 [2024-11-19 17:17:01.286753] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:08.851 [2024-11-19 17:17:01.373791] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:09.110 [2024-11-19 17:17:01.439073] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:09.110 [2024-11-19 17:17:01.439599] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x115d900:1 started. 00:18:09.110 [2024-11-19 17:17:01.441789] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:09.110 [2024-11-19 17:17:01.441822] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.110 [2024-11-19 17:17:01.445144] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x115d900 was disconnected and freed. delete nvme_qpair. 
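[editor's sketch, not captured output] host/discovery.sh@143 wraps a second bdev_nvme_start_discovery in NOT because the service name nvme is already registered: the call is expected to fail with Code=-17 (File exists), and the test passes only if it does, as the error dump below confirms. The equivalent manual invocation, assuming the same host socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # A duplicate discovery service name must be rejected with -17.
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
        || echo 'duplicate discovery start rejected, as expected'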
00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.110 2024/11/19 17:17:01 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:09.110 request: 00:18:09.110 { 00:18:09.110 "method": "bdev_nvme_start_discovery", 00:18:09.110 "params": { 00:18:09.110 "name": "nvme", 00:18:09.110 "trtype": "tcp", 00:18:09.110 "traddr": "10.0.0.3", 00:18:09.110 "adrfam": "ipv4", 00:18:09.110 "trsvcid": "8009", 00:18:09.110 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:09.110 "wait_for_attach": true 00:18:09.110 } 00:18:09.110 } 00:18:09.110 Got JSON-RPC error response 00:18:09.110 GoRPCClient: error on JSON-RPC call 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.110 17:17:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.110 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.111 2024/11/19 17:17:01 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:09.111 request: 00:18:09.111 { 00:18:09.111 "method": "bdev_nvme_start_discovery", 00:18:09.111 "params": { 00:18:09.111 "name": "nvme_second", 00:18:09.111 "trtype": "tcp", 00:18:09.111 "traddr": "10.0.0.3", 00:18:09.111 "adrfam": "ipv4", 00:18:09.111 "trsvcid": "8009", 00:18:09.111 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:09.111 "wait_for_attach": true 00:18:09.111 } 00:18:09.111 } 00:18:09.111 Got JSON-RPC error response 00:18:09.111 GoRPCClient: error on JSON-RPC call 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.111 17:17:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:09.111 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.370 17:17:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.386 [2024-11-19 
17:17:02.734509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.386 [2024-11-19 17:17:02.734585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115d720 with addr=10.0.0.3, port=8010 00:18:10.386 [2024-11-19 17:17:02.734614] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:10.386 [2024-11-19 17:17:02.734623] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:10.386 [2024-11-19 17:17:02.734632] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:11.330 [2024-11-19 17:17:03.734371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.331 [2024-11-19 17:17:03.734411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115d720 with addr=10.0.0.3, port=8010 00:18:11.331 [2024-11-19 17:17:03.734425] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:11.331 [2024-11-19 17:17:03.734433] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:11.331 [2024-11-19 17:17:03.734440] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:12.265 [2024-11-19 17:17:04.734298] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:12.266 2024/11/19 17:17:04 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:12.266 request: 00:18:12.266 { 00:18:12.266 "method": "bdev_nvme_start_discovery", 00:18:12.266 "params": { 00:18:12.266 "name": "nvme_second", 00:18:12.266 "trtype": "tcp", 00:18:12.266 "traddr": "10.0.0.3", 00:18:12.266 "adrfam": "ipv4", 00:18:12.266 "trsvcid": "8010", 00:18:12.266 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:12.266 "wait_for_attach": false, 00:18:12.266 "attach_timeout_ms": 3000 00:18:12.266 } 00:18:12.266 } 00:18:12.266 Got JSON-RPC error response 00:18:12.266 GoRPCClient: error on JSON-RPC call 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
xargs 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89404 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:12.266 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.525 rmmod nvme_tcp 00:18:12.525 rmmod nvme_fabrics 00:18:12.525 rmmod nvme_keyring 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89368 ']' 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89368 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 89368 ']' 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 89368 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89368 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.525 killing process with pid 89368 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89368' 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 89368 00:18:12.525 17:17:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 89368 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:12.784 17:17:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.784 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:13.043 00:18:13.043 real 0m11.052s 00:18:13.043 user 0m21.755s 00:18:13.043 sys 0m1.774s 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:13.043 ************************************ 00:18:13.043 END TEST nvmf_host_discovery 00:18:13.043 ************************************ 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.043 ************************************ 00:18:13.043 START TEST nvmf_host_multipath_status 00:18:13.043 ************************************ 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:13.043 * Looking for test storage... 00:18:13.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.043 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.302 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.303 --rc genhtml_branch_coverage=1 00:18:13.303 --rc genhtml_function_coverage=1 00:18:13.303 --rc genhtml_legend=1 00:18:13.303 --rc geninfo_all_blocks=1 00:18:13.303 --rc geninfo_unexecuted_blocks=1 00:18:13.303 00:18:13.303 ' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.303 --rc genhtml_branch_coverage=1 00:18:13.303 --rc genhtml_function_coverage=1 00:18:13.303 --rc genhtml_legend=1 00:18:13.303 --rc geninfo_all_blocks=1 00:18:13.303 --rc geninfo_unexecuted_blocks=1 00:18:13.303 00:18:13.303 ' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.303 --rc genhtml_branch_coverage=1 00:18:13.303 --rc genhtml_function_coverage=1 00:18:13.303 --rc genhtml_legend=1 00:18:13.303 --rc geninfo_all_blocks=1 00:18:13.303 --rc geninfo_unexecuted_blocks=1 00:18:13.303 00:18:13.303 ' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.303 --rc genhtml_branch_coverage=1 00:18:13.303 --rc genhtml_function_coverage=1 00:18:13.303 --rc genhtml_legend=1 00:18:13.303 --rc geninfo_all_blocks=1 00:18:13.303 --rc geninfo_unexecuted_blocks=1 00:18:13.303 00:18:13.303 ' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.303 17:17:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.303 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:13.303 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:13.304 Cannot find device "nvmf_init_br" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:13.304 Cannot find device "nvmf_init_br2" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:13.304 Cannot find device "nvmf_tgt_br" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.304 Cannot find device "nvmf_tgt_br2" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:13.304 Cannot find device "nvmf_init_br" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:13.304 Cannot find device "nvmf_init_br2" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:13.304 Cannot find device "nvmf_tgt_br" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:13.304 Cannot find device "nvmf_tgt_br2" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:13.304 Cannot find device "nvmf_br" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:13.304 Cannot find device "nvmf_init_if" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:13.304 Cannot find device "nvmf_init_if2" 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.304 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:13.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:13.563 00:18:13.563 --- 10.0.0.3 ping statistics --- 00:18:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.563 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:13.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:13.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:18:13.563 00:18:13.563 --- 10.0.0.4 ping statistics --- 00:18:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.563 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:13.563 00:18:13.563 --- 10.0.0.1 ping statistics --- 00:18:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.563 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:13.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:18:13.563 00:18:13.563 --- 10.0.0.2 ping statistics --- 00:18:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.563 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.563 17:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89949 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89949 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89949 ']' 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
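Condensed, the nvmf/common.sh trace above builds the two-path test network the rest of the run depends on: two veth pairs face the initiator (nvmf_init_if, nvmf_init_if2), two face the target inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if, nvmf_tgt_if2), the four peer ends are enslaved to the nvmf_br bridge, and iptables accepts TCP/4420 on the initiator interfaces. A minimal sketch of the same topology, using the harness's interface names and 10.0.0.0/24 addressing (the trace additionally brings each endpoint and the namespace loopback up and adds a bridge FORWARD accept rule):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br && ip link set "$dev" up
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four pings above confirm both directions across the bridge; nvmf_tgt is then launched via ip netns exec inside the namespace so that it owns the 10.0.0.3/10.0.0.4 side.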
00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.563 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:13.563 [2024-11-19 17:17:06.080441] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:13.563 [2024-11-19 17:17:06.080566] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.822 [2024-11-19 17:17:06.215632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.822 [2024-11-19 17:17:06.266220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.822 [2024-11-19 17:17:06.266280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.822 [2024-11-19 17:17:06.266291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.822 [2024-11-19 17:17:06.266298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.822 [2024-11-19 17:17:06.266304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.822 [2024-11-19 17:17:06.267551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.822 [2024-11-19 17:17:06.267569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89949 00:18:14.081 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.339 [2024-11-19 17:17:06.687999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.339 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:14.598 Malloc0 00:18:14.598 17:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:14.856 17:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.114 17:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:15.372 [2024-11-19 17:17:07.790576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.372 17:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:15.631 [2024-11-19 17:17:08.102844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90039 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90039 /var/tmp/bdevperf.sock 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90039 ']' 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
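The RPC sequence above provisions the whole target in a handful of calls; condensed (rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags exactly as the harness passes them
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2                        # -r enables ANA reporting, the point of this test
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

Two listeners on the same address but different ports give the host two distinguishable paths to one subsystem. bdevperf is started with -z, so it idles on /var/tmp/bdevperf.sock until perform_tests is issued later; the entries that follow attach the controller twice (ports 4420 and 4421) with -x multipath, merging both connections into a single Nvme0n1 bdev.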
00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.631 17:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:17.008 17:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.008 17:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:17.008 17:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:17.008 17:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:17.574 Nvme0n1 00:18:17.574 17:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:17.833 Nvme0n1 00:18:17.833 17:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:17.833 17:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.365 17:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:20.365 17:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:20.365 17:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:20.623 17:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:21.558 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:21.558 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:21.558 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.558 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:22.125 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.125 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:22.125 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.125 17:17:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:22.384 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:22.384 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:22.384 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.384 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:22.642 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.642 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:22.643 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.643 17:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:22.900 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.900 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:22.900 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.900 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:23.159 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.159 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:23.159 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.159 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:23.417 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.417 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:23.417 17:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:23.675 17:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:23.933 17:17:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.326 17:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:25.894 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.894 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:25.894 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.894 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:26.153 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.153 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.153 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.153 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.412 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.412 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:26.412 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.412 17:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.979 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.979 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:26.979 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.979 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:27.237 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.237 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:27.237 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:27.495 17:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:27.754 17:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:28.691 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:28.691 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:28.691 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.691 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:29.258 17:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.826 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.826 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:29.826 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.826 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:30.086 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.086 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:30.086 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.086 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:30.344 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.344 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:30.344 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.344 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:30.604 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.604 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:30.604 17:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:30.863 17:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:31.122 17:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:32.058 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:32.058 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:32.058 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:32.058 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.623 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.623 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:32.623 17:17:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.623 17:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:32.881 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:32.881 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:32.881 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.881 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:33.143 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.143 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:33.143 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:33.143 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.401 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.401 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:33.401 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.401 17:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:33.660 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.660 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:33.660 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.660 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:33.919 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.919 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:33.919 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:34.514 17:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:34.514 17:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.891 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:36.150 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:36.150 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:36.150 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:36.150 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.409 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.409 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:36.409 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.667 17:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:36.926 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.926 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:36.926 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.926 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:18:37.184 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.184 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:37.184 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.184 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:37.443 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.443 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:37.443 17:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:37.702 17:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:38.270 17:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:39.206 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:39.206 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:39.206 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.206 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:39.465 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:39.465 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:39.465 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:39.465 17:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.724 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.724 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:39.724 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.724 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
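Every one of these assertions goes through the same pattern: ask bdevperf for its I/O paths over the RPC socket, filter one attribute of one path with jq, and compare. A sketch of that helper (the real one is port_status in host/multipath_status.sh; this is a reconstruction from the traced commands, not a copy, and rpc.py again abbreviates the full scripts path):

  port_status() {
      # usage: port_status <trsvcid> <attr> <expected>, e.g. port_status 4420 current true
      local got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

check_status then chains six of these, matching the six booleans passed to it in the trace: current for 4420 and 4421, then connected for both, then accessible for both.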
00:18:39.982 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.982 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:39.982 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.982 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:40.241 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.241 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:40.241 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.241 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:40.499 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:40.499 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:40.499 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.499 17:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:40.759 17:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.759 17:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:41.018 17:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:41.018 17:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:41.277 17:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:41.845 17:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:42.781 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:42.781 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:42.781 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
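Up to this point the controller has been running the default active_passive policy, where only one path is ever current. The bdev_nvme_set_multipath_policy ... -p active_active call above changes that, which is why this round of checks expects current=true on both ports at once when both listeners are optimized. The listener flips themselves reduce to a two-line helper (set_ANA_state in the script; sketch reconstructed from the traced RPCs):

  set_ANA_state() {
      # usage: set_ANA_state <state for 4420> <state for 4421>
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

The sleep 1 after each flip gives the initiator time to observe the ANA change before the assertions run.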
00:18:42.781 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:43.040 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.040 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:43.040 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.040 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.299 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.299 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:43.299 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.299 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:43.557 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.557 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:43.557 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.557 17:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:43.816 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.816 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:43.816 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.816 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.075 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.075 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.075 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.075 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:44.334 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.334 
17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:44.334 17:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:44.593 17:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:45.160 17:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:46.095 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:46.095 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:46.095 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.095 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.354 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.354 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:46.354 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.354 17:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:46.613 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.613 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:46.613 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.613 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:46.871 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.871 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:46.871 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.872 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.442 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.442 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:47.442 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.442 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.699 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.699 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:47.699 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.699 17:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:47.957 17:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.957 17:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:47.957 17:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:48.523 17:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:48.782 17:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:49.716 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:49.716 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:49.716 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.716 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.975 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.975 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:49.975 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.975 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:50.234 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.234 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:18:50.234 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.234 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:50.493 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.493 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:50.493 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.493 17:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:50.753 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.753 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:50.753 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.753 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:51.012 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.012 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:51.012 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:51.012 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.580 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.580 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:51.580 17:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:51.580 17:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:51.838 17:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:53.223 17:17:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.223 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:53.498 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.498 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:53.498 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.498 17:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:53.757 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.757 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:53.757 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.757 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:54.015 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.016 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:54.016 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.016 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:54.583 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.583 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:54.583 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.583 17:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90039 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90039 ']' 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90039 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90039 00:18:54.843 killing process with pid 90039 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90039' 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90039 00:18:54.843 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90039 00:18:54.843 { 00:18:54.843 "results": [ 00:18:54.843 { 00:18:54.843 "job": "Nvme0n1", 00:18:54.843 "core_mask": "0x4", 00:18:54.843 "workload": "verify", 00:18:54.843 "status": "terminated", 00:18:54.843 "verify_range": { 00:18:54.843 "start": 0, 00:18:54.843 "length": 16384 00:18:54.843 }, 00:18:54.843 "queue_depth": 128, 00:18:54.843 "io_size": 4096, 00:18:54.843 "runtime": 36.732479, 00:18:54.843 "iops": 7991.619623603406, 00:18:54.843 "mibps": 31.217264154700803, 00:18:54.843 "io_failed": 0, 00:18:54.843 "io_timeout": 0, 00:18:54.843 "avg_latency_us": 15986.381867929857, 00:18:54.843 "min_latency_us": 181.52727272727273, 00:18:54.843 "max_latency_us": 4026531.84 00:18:54.843 } 00:18:54.843 ], 00:18:54.843 "core_count": 1 00:18:54.843 } 00:18:55.105 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90039 00:18:55.105 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:55.105 [2024-11-19 17:17:08.187783] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:55.105 [2024-11-19 17:17:08.187923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90039 ] 00:18:55.105 [2024-11-19 17:17:08.337433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.105 [2024-11-19 17:17:08.408438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.105 Running I/O for 90 seconds... 
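The multipath checks traced above reduce to two RPC surfaces: nvmf_subsystem_listener_set_ana_state against the target to flip a listener's ANA state, and bdev_nvme_get_io_paths against bdevperf's RPC socket, filtered through jq, to confirm what the host-side multipath layer sees. The following bash sketch reconstructs that round trip from the trace; it is not the literal multipath_status.sh source, and the port_status helper shape, the trailing assertions and the comments are assumptions, while the NQN, addresses, ports, socket path, RPC names and jq filter are exactly the ones logged above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: demote the 4420 listener to non_optimized, take 4421 out entirely.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
  sleep 1   # as in the trace: give the initiator a moment to pick up the ANA change

  # Host side: dump bdevperf's io_paths and pick out one flag for one port.
  port_status() {   # port_status <trsvcid> <field> <expected>
      local got
      got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ $got == "$3" ]]
  }

  port_status 4420 current true        # the non-optimized path still carries the I/O
  port_status 4421 accessible false    # the inaccessible path drops out of the ANA view

With the listeners in that state, the expected flag set matches the check_status true false true true true false call above: 4420 remains current, connected and accessible, while 4421 stays connected but is no longer accessible.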
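For the run summary itself, the results object that bdevperf emits on shutdown (printed above, interleaved with the kill trace) is plain JSON once the per-line log timestamps are stripped. Assuming it has been saved to a file, here hypothetically results.json, a jq one-liner recovers the headline figures:

  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s (\(.status))"' results.json
  # -> Nvme0n1: 7991.619623603406 IOPS, 31.217264154700803 MiB/s over 36.732479s (terminated)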
00:18:55.105 8199.00 IOPS, 32.03 MiB/s [2024-11-19T17:17:47.638Z] 8116.50 IOPS, 31.71 MiB/s [2024-11-19T17:17:47.638Z] 8186.33 IOPS, 31.98 MiB/s [2024-11-19T17:17:47.638Z] 8213.25 IOPS, 32.08 MiB/s [2024-11-19T17:17:47.638Z] 8362.00 IOPS, 32.66 MiB/s [2024-11-19T17:17:47.638Z] 8531.33 IOPS, 33.33 MiB/s [2024-11-19T17:17:47.638Z] 8759.14 IOPS, 34.22 MiB/s [2024-11-19T17:17:47.638Z] 8700.75 IOPS, 33.99 MiB/s [2024-11-19T17:17:47.638Z] 8731.44 IOPS, 34.11 MiB/s [2024-11-19T17:17:47.638Z] 8726.40 IOPS, 34.09 MiB/s [2024-11-19T17:17:47.638Z] 8791.55 IOPS, 34.34 MiB/s [2024-11-19T17:17:47.638Z] 8818.50 IOPS, 34.45 MiB/s [2024-11-19T17:17:47.638Z] 8831.69 IOPS, 34.50 MiB/s [2024-11-19T17:17:47.638Z] 8922.36 IOPS, 34.85 MiB/s [2024-11-19T17:17:47.638Z] 8929.33 IOPS, 34.88 MiB/s [2024-11-19T17:17:47.638Z] 8944.31 IOPS, 34.94 MiB/s [2024-11-19T17:17:47.638Z]
[... a long run of paired nvme_qpair.c *NOTICE* lines elided: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs from 2024-11-19 17:17:26.686446 through 17:17:26.691960, covering WRITE lba 114240-114800 and READ lba 113784-114232 on qid:1, with every completion reported as ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:55.109 8546.76 IOPS, 33.39 MiB/s [2024-11-19T17:17:47.642Z] 8071.94 IOPS, 31.53 MiB/s [2024-11-19T17:17:47.642Z] 7647.11 IOPS, 29.87 MiB/s [2024-11-19T17:17:47.642Z] 7264.75 IOPS, 28.38 MiB/s [2024-11-19T17:17:47.642Z] 7214.95 IOPS, 28.18 MiB/s [2024-11-19T17:17:47.642Z] 7289.45 IOPS, 28.47 MiB/s [2024-11-19T17:17:47.642Z] 7357.87 IOPS, 28.74 MiB/s [2024-11-19T17:17:47.642Z] 7448.08 IOPS, 29.09 MiB/s [2024-11-19T17:17:47.642Z] 7544.44 IOPS, 29.47 MiB/s [2024-11-19T17:17:47.642Z] 7633.81 IOPS, 29.82 MiB/s [2024-11-19T17:17:47.642Z] 7698.44 IOPS, 30.07 MiB/s [2024-11-19T17:17:47.642Z] 7741.21 IOPS, 30.24 MiB/s [2024-11-19T17:17:47.642Z] 7750.07 IOPS, 30.27 MiB/s [2024-11-19T17:17:47.642Z] 7763.10 IOPS, 30.32 MiB/s [2024-11-19T17:17:47.642Z] 7797.68 IOPS, 30.46 MiB/s [2024-11-19T17:17:47.642Z] 7855.59 IOPS, 30.69 MiB/s [2024-11-19T17:17:47.642Z] 7898.12 IOPS, 30.85 MiB/s [2024-11-19T17:17:47.642Z]
[... further paired nvme_qpair.c *NOTICE* lines elided: 2024-11-19 17:17:44.330970 through 17:17:44.336594, covering WRITE lba 46984-47320 and READ lba 46376-47008 on qid:1, again all completing as ASYMMETRIC ACCESS INACCESSIBLE (03/02); the run continues below ...]
00:18:55.110 [2024-11-19 17:17:44.336613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:55.110 [2024-11-19 17:17:44.336626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.336658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.336691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.110 [2024-11-19 17:17:44.336914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.336946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.336979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.336998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.337031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.337064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.337097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.110 [2024-11-19 17:17:44.337164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.110 [2024-11-19 17:17:44.337178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.110 7919.62 IOPS, 30.94 MiB/s [2024-11-19T17:17:47.643Z] 7952.80 IOPS, 31.07 MiB/s [2024-11-19T17:17:47.643Z] 7980.25 IOPS, 31.17 MiB/s [2024-11-19T17:17:47.643Z] Received shutdown signal, test time was about 36.733172 seconds 00:18:55.110 00:18:55.110 Latency(us) 00:18:55.110 [2024-11-19T17:17:47.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.110 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:55.110 Verification LBA range: start 0x0 length 0x4000 00:18:55.110 Nvme0n1 : 36.73 7991.62 31.22 0.00 0.00 15986.38 181.53 4026531.84 00:18:55.110 [2024-11-19T17:17:47.643Z] =================================================================================================================== 00:18:55.110 [2024-11-19T17:17:47.644Z] Total : 7991.62 31.22 0.00 0.00 15986.38 181.53 4026531.84 00:18:55.111 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:55.370 17:17:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.370 rmmod nvme_tcp 00:18:55.370 rmmod nvme_fabrics 00:18:55.370 rmmod nvme_keyring 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89949 ']' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 89949 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89949 ']' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89949 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89949 00:18:55.370 killing process with pid 89949 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89949' 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89949 00:18:55.370 17:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89949 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.628 17:17:48 
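[annotation] The iptr pipeline straddling this point of the trace restores the firewall by replaying the saved ruleset minus every rule the tests tagged with an SPDK_NVMF comment. As a standalone sketch (helper name and tag taken from the trace, the script framing is illustrative):

#!/usr/bin/env bash
# Drop only the firewall rules the nvmf tests added: every SPDK rule was
# inserted with "-m comment --comment SPDK_NVMF:...", so filtering those
# lines out of iptables-save and restoring the rest undoes them in one shot
# without touching any pre-existing rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore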
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:55.628 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:55.887 00:18:55.887 real 0m42.868s 00:18:55.887 user 2m21.082s 00:18:55.887 sys 0m10.713s 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:55.887 ************************************ 00:18:55.887 END TEST nvmf_host_multipath_status 00:18:55.887 ************************************ 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.887 17:17:48 
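[annotation] The nvmf_veth_fini run traced above unwinds the bridged veth topology in reverse order of its creation. A condensed sketch of that teardown (device and namespace names are from the trace; the loop and the 2>/dev/null || true error tolerance are added for illustration):

# Detach the bridge ports, drop the bridge, then delete the veth pairs;
# deleting the namespace last also reaps anything still parked inside it.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster 2>/dev/null || true
    ip link set "$port" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true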
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.887 ************************************ 00:18:55.887 START TEST nvmf_discovery_remove_ifc 00:18:55.887 ************************************ 00:18:55.887 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:56.147 * Looking for test storage... 00:18:56.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.147 --rc genhtml_branch_coverage=1 00:18:56.147 --rc genhtml_function_coverage=1 00:18:56.147 --rc genhtml_legend=1 00:18:56.147 --rc geninfo_all_blocks=1 00:18:56.147 --rc geninfo_unexecuted_blocks=1 00:18:56.147 00:18:56.147 ' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.147 --rc genhtml_branch_coverage=1 00:18:56.147 --rc genhtml_function_coverage=1 00:18:56.147 --rc genhtml_legend=1 00:18:56.147 --rc geninfo_all_blocks=1 00:18:56.147 --rc geninfo_unexecuted_blocks=1 00:18:56.147 00:18:56.147 ' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.147 --rc genhtml_branch_coverage=1 00:18:56.147 --rc genhtml_function_coverage=1 00:18:56.147 --rc genhtml_legend=1 00:18:56.147 --rc geninfo_all_blocks=1 00:18:56.147 --rc geninfo_unexecuted_blocks=1 00:18:56.147 00:18:56.147 ' 00:18:56.147 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.147 --rc genhtml_branch_coverage=1 00:18:56.147 --rc genhtml_function_coverage=1 00:18:56.147 --rc genhtml_legend=1 00:18:56.147 --rc geninfo_all_blocks=1 00:18:56.147 --rc geninfo_unexecuted_blocks=1 00:18:56.147 00:18:56.147 ' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:56.148 17:17:48 
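[annotation] The cmp_versions trace above is scripts/common.sh asking whether the installed lcov (1.15 here) predates 2.x before choosing coverage flags. The comparison reduces to a sketch like this (function and variable names are illustrative; the IFS split and the --rc flag strings mirror the trace):

# Split dotted versions on '.', '-' and ':' and compare field by field.
version_lt() {
    local -a ver1 ver2
    local i a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1   # equal versions are not less-than
}

# lcov 1.x still wants the old --rc spellings that the trace exports:
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    export LCOV="lcov $LCOV_OPTS"
fi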
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:56.148 17:17:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:56.148 Cannot find device "nvmf_init_br" 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:56.148 Cannot find device "nvmf_init_br2" 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:56.148 Cannot find device "nvmf_tgt_br" 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.148 Cannot find device "nvmf_tgt_br2" 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:56.148 Cannot find device "nvmf_init_br" 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:56.148 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:56.148 Cannot find device "nvmf_init_br2" 00:18:56.149 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:56.149 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:56.149 Cannot find device "nvmf_tgt_br" 00:18:56.149 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:56.149 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:56.149 Cannot find device "nvmf_tgt_br2" 00:18:56.407 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:56.408 Cannot find device "nvmf_br" 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:56.408 Cannot find device "nvmf_init_if" 00:18:56.408 17:17:48 
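[annotation] All of the "Cannot find device" noise above is expected: nvmf_veth_init probes for leftovers from a previous run, and each failing ip link call is immediately followed by true so that set -e keeps the suite going. The same idempotent pre-clean, condensed (device names from the trace; the loop form is illustrative):

# Best-effort removal of stale interfaces; a fresh VM simply prints
# "Cannot find device ..." for each probe and carries on.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 \
           nvmf_init_if nvmf_init_if2; do
    ip link set "$dev" down 2>/dev/null || true
    ip link delete "$dev" 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true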
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:56.408 Cannot find device "nvmf_init_if2" 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:56.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:56.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.408 17:17:48 
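[annotation] Taken together, the setup commands traced above and continued just below build the test topology: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, with all the root-side peers enslaved to one bridge. A minimal one-initiator, one-target reduction (names and addresses from the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # listener address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
# After this, 10.0.0.1 (root netns) reaches 10.0.0.3 (target netns) through
# nvmf_br, which is exactly what the ping checks below verify.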
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:56.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:56.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:56.408 00:18:56.408 --- 10.0.0.3 ping statistics --- 00:18:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.408 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:56.408 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:56.408 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:56.408 00:18:56.408 --- 10.0.0.4 ping statistics --- 00:18:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.408 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:56.408 00:18:56.408 --- 10.0.0.1 ping statistics --- 00:18:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.408 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:56.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:56.408 00:18:56.408 --- 10.0.0.2 ping statistics --- 00:18:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.408 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.408 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91444 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91444 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91444 ']' 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
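[annotation] nvmfappstart backgrounds nvmf_tgt inside the namespace (pid 91444 here) and waitforlisten blocks until the RPC socket answers. The idiom reduces to a polling loop along these lines (the rpc_get_methods probe and retry budget are illustrative assumptions, not lifted from the script):

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
        # the app is ready once its RPC socket accepts and answers a request
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                     # timed out
}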
00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.667 17:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.667 [2024-11-19 17:17:49.023980] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:56.667 [2024-11-19 17:17:49.024085] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.667 [2024-11-19 17:17:49.179180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.925 [2024-11-19 17:17:49.234266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.925 [2024-11-19 17:17:49.234331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.925 [2024-11-19 17:17:49.234347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.925 [2024-11-19 17:17:49.234357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.925 [2024-11-19 17:17:49.234367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.925 [2024-11-19 17:17:49.234835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.492 17:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.492 17:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:57.492 17:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.492 17:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.492 17:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.750 [2024-11-19 17:17:50.052732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.750 [2024-11-19 17:17:50.060866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:57.750 null0 00:18:57.750 [2024-11-19 17:17:50.092739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91494 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 91494 /tmp/host.sock 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91494 ']' 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:57.750 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.750 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.750 [2024-11-19 17:17:50.176952] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:57.750 [2024-11-19 17:17:50.177062] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91494 ] 00:18:58.009 [2024-11-19 17:17:50.325587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.009 [2024-11-19 17:17:50.374707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.267 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.267 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:58.267 17:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.267 17:17:50 
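[annotation] The bdev_nvme_start_discovery call traced above is the heart of this test: it attaches to the discovery subsystem on port 8009 with deliberately short loss timers, so that pulling nvmf_tgt_if later forces the controller to be declared lost within seconds. Reissued standalone (flags copied from the trace; running it assumes the same /tmp/host.sock host app):

# --ctrlr-loss-timeout-sec 2    declare the controller lost after 2 s offline
# --reconnect-delay-sec 1       retry the connection once per second
# --fast-io-fail-timeout-sec 1  fail queued I/O fast instead of stalling it
# --wait-for-attach             block until the discovered controller attaches
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach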
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:59.203 [2024-11-19 17:17:51.555712] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:59.203 [2024-11-19 17:17:51.555763] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:59.203 [2024-11-19 17:17:51.555782] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:59.203 [2024-11-19 17:17:51.641832] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:59.203 [2024-11-19 17:17:51.696165] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:59.203 [2024-11-19 17:17:51.696953] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18bc150:1 started. 00:18:59.203 [2024-11-19 17:17:51.698715] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:59.203 [2024-11-19 17:17:51.698997] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:59.203 [2024-11-19 17:17:51.699039] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:59.203 [2024-11-19 17:17:51.699057] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:59.203 [2024-11-19 17:17:51.699080] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:59.203 [2024-11-19 17:17:51.704371] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18bc150 was disconnected and freed. delete nvme_qpair. 
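For context, the attach sequence traced above reduces to three RPCs against the host-side SPDK app. A minimal sketch in bash, assuming rpc.py from the SPDK repo is invoked directly rather than through the suite's rpc_cmd wrapper (all flags copied verbatim from the trace):

    # The host app was started with --wait-for-rpc, so bdev_nvme options
    # can be set before framework init; discovery then attaches nvme0 and
    # blocks until its namespace is exposed as a bdev.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock'

    $rpc bdev_nvme_set_options -e 1
    $rpc framework_start_init
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach, the RPC only returns once the discovered subsystem is attached, which is why the bdev list check immediately after it can already see nvme0n1.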
00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:59.203 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:59.461 17:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:00.397 17:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:01.773 17:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:02.709 17:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.709 17:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:02.709 17:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:03.644 17:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:04.581 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.839 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:04.839 17:17:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:04.839 [2024-11-19 17:17:57.137060] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:04.839 [2024-11-19 17:17:57.137135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.839 [2024-11-19 17:17:57.137160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.840 [2024-11-19 17:17:57.137171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.840 [2024-11-19 17:17:57.137179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.840 [2024-11-19 17:17:57.137188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.840 [2024-11-19 17:17:57.137196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.840 [2024-11-19 17:17:57.137214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.840 [2024-11-19 17:17:57.137222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.840 [2024-11-19 17:17:57.137231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.840 [2024-11-19 17:17:57.137239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.840 [2024-11-19 17:17:57.137247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899520 is same with the state(6) to be set 
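The spdk_sock_recv() timeout and qpair teardown above are the intended result of the fault injected a few lines earlier in the trace; the injection itself is just two commands run in the target's network namespace (copied verbatim from the trace):

    # Drop the target address and take the veth down; established
    # NVMe/TCP qpairs to 10.0.0.3:4420 then fail with errno 110.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down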
00:19:04.840 [2024-11-19 17:17:57.147056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899520 (9): Bad file descriptor 00:19:04.840 [2024-11-19 17:17:57.157073] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:04.840 [2024-11-19 17:17:57.157259] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:04.840 [2024-11-19 17:17:57.157375] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:04.840 [2024-11-19 17:17:57.157493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:04.840 [2024-11-19 17:17:57.157632] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:05.775 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:05.775 [2024-11-19 17:17:58.213862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:05.775 [2024-11-19 17:17:58.213947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899520 with addr=10.0.0.3, port=4420 00:19:05.775 [2024-11-19 17:17:58.213978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899520 is same with the state(6) to be set 00:19:05.775 [2024-11-19 17:17:58.214055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899520 (9): Bad file descriptor 00:19:05.775 [2024-11-19 17:17:58.214915] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:05.775 [2024-11-19 17:17:58.214981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:05.775 [2024-11-19 17:17:58.215005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:05.775 [2024-11-19 17:17:58.215027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:05.775 [2024-11-19 17:17:58.215048] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:05.775 [2024-11-19 17:17:58.215062] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:05.775 [2024-11-19 17:17:58.215074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
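While the reconnect poller cycles through the disconnect/reconnect states logged above, the controller's condition can also be inspected out of band. A sketch, not part of this script's flow, using the standard bdev_nvme_get_controllers RPC (the output schema is from memory and may vary by SPDK version):

    # Dump controller info for nvme0; the failed/reconnecting conditions
    # correspond to the nvme_ctrlr_fail and reconnect messages above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_get_controllers -n nvme0 | jq .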
00:19:05.775 [2024-11-19 17:17:58.215095] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:05.775 [2024-11-19 17:17:58.215107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:05.776 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.776 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:05.776 17:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:06.711 [2024-11-19 17:17:59.215165] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:06.711 [2024-11-19 17:17:59.215583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:06.711 [2024-11-19 17:17:59.215629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:06.711 [2024-11-19 17:17:59.215642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:06.711 [2024-11-19 17:17:59.215652] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:06.711 [2024-11-19 17:17:59.215663] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:06.711 [2024-11-19 17:17:59.215670] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:06.711 [2024-11-19 17:17:59.215675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
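The repeating get_bdev_list/sleep blocks in the trace implement a simple poll. A reconstruction of the two helpers from their xtrace output above (the real functions in discovery_remove_ifc.sh may add a timeout this sketch omits):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_list() {
        # One sorted, space-separated line of bdev names, e.g. "nvme0n1".
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expectation:
        # '' while waiting for nvme0n1 to disappear after the interface
        # removal, nvme1n1 once the path has been restored.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }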
00:19:06.711 [2024-11-19 17:17:59.215717] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:06.711 [2024-11-19 17:17:59.215797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.711 [2024-11-19 17:17:59.215813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.711 [2024-11-19 17:17:59.215828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.711 [2024-11-19 17:17:59.215838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.711 [2024-11-19 17:17:59.215848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.711 [2024-11-19 17:17:59.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.711 [2024-11-19 17:17:59.215867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.711 [2024-11-19 17:17:59.215876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.711 [2024-11-19 17:17:59.215886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.711 [2024-11-19 17:17:59.215894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.711 [2024-11-19 17:17:59.215920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:06.712 [2024-11-19 17:17:59.216035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18262d0 (9): Bad file descriptor 00:19:06.712 [2024-11-19 17:17:59.217037] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:06.712 [2024-11-19 17:17:59.217052] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:06.970 17:17:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:07.906 17:18:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:07.906 17:18:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:08.842 [2024-11-19 17:18:01.227481] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:08.842 [2024-11-19 17:18:01.227753] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:08.842 [2024-11-19 17:18:01.227792] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:08.842 [2024-11-19 17:18:01.313619] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:08.842 [2024-11-19 17:18:01.368021] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:08.842 [2024-11-19 17:18:01.368814] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1893100:1 started. 00:19:08.842 [2024-11-19 17:18:01.370515] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:08.842 [2024-11-19 17:18:01.370563] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:08.842 [2024-11-19 17:18:01.370592] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:08.842 [2024-11-19 17:18:01.370610] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:08.842 [2024-11-19 17:18:01.370621] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:09.100 [2024-11-19 17:18:01.375942] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1893100 was disconnected and freed. delete nvme_qpair. 
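The recovery just traced is the mirror image of the earlier fault injection: restore the address, bring the link back up, and wait for discovery to re-attach the subsystem as a new controller (nvme1) whose namespace surfaces as nvme1n1. Commands copied from the trace, with wait_for_bdev as sketched above:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1   # new ctrlr nvme1 / qpair 0x1893100 in the log above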
00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91494 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91494 ']' 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91494 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91494 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91494' 00:19:09.100 killing process with pid 91494 00:19:09.100 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91494 00:19:09.101 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91494 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.359 rmmod nvme_tcp 00:19:09.359 rmmod nvme_fabrics 00:19:09.359 rmmod nvme_keyring 00:19:09.359 17:18:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91444 ']' 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91444 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91444 ']' 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91444 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.359 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91444 00:19:09.617 killing process with pid 91444 00:19:09.617 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.617 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.617 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91444' 00:19:09.617 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91444 00:19:09.617 17:18:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91444 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:09.617 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:09.876 00:19:09.876 real 0m14.005s 00:19:09.876 user 0m24.342s 00:19:09.876 sys 0m1.653s 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.876 17:18:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:09.876 ************************************ 00:19:09.876 END TEST nvmf_discovery_remove_ifc 00:19:09.876 ************************************ 00:19:10.135 17:18:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:10.135 17:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.135 17:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.136 ************************************ 00:19:10.136 START TEST nvmf_identify_kernel_target 00:19:10.136 ************************************ 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:10.136 * Looking for test storage... 
00:19:10.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.136 --rc genhtml_branch_coverage=1 00:19:10.136 --rc genhtml_function_coverage=1 00:19:10.136 --rc genhtml_legend=1 00:19:10.136 --rc geninfo_all_blocks=1 00:19:10.136 --rc geninfo_unexecuted_blocks=1 00:19:10.136 00:19:10.136 ' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.136 --rc genhtml_branch_coverage=1 00:19:10.136 --rc genhtml_function_coverage=1 00:19:10.136 --rc genhtml_legend=1 00:19:10.136 --rc geninfo_all_blocks=1 00:19:10.136 --rc geninfo_unexecuted_blocks=1 00:19:10.136 00:19:10.136 ' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.136 --rc genhtml_branch_coverage=1 00:19:10.136 --rc genhtml_function_coverage=1 00:19:10.136 --rc genhtml_legend=1 00:19:10.136 --rc geninfo_all_blocks=1 00:19:10.136 --rc geninfo_unexecuted_blocks=1 00:19:10.136 00:19:10.136 ' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.136 --rc genhtml_branch_coverage=1 00:19:10.136 --rc genhtml_function_coverage=1 00:19:10.136 --rc genhtml_legend=1 00:19:10.136 --rc geninfo_all_blocks=1 00:19:10.136 --rc geninfo_unexecuted_blocks=1 00:19:10.136 00:19:10.136 ' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
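The lt/cmp_versions walk traced above gates the lcov coverage flags on the installed lcov version by splitting dotted version strings on ., - and : and comparing field by field. A condensed sketch of that comparison (version_lt is an illustrative name; the real cmp_versions in scripts/common.sh handles <, >, = and more):

    version_lt() {
        # Numeric field-wise compare of $1 < $2; missing fields count as 0.
        local IFS=.-:
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal
    }

    version_lt 1.15 2 && echo 'lcov older than 2: use the legacy --rc lcov_* options'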
00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.136 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:10.137 17:18:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.137 17:18:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.137 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:10.396 Cannot find device "nvmf_init_br" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:10.396 Cannot find device "nvmf_init_br2" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:10.396 Cannot find device "nvmf_tgt_br" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.396 Cannot find device "nvmf_tgt_br2" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:10.396 Cannot find device "nvmf_init_br" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:10.396 Cannot find device "nvmf_init_br2" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:10.396 Cannot find device "nvmf_tgt_br" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:10.396 Cannot find device "nvmf_tgt_br2" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:10.396 Cannot find device "nvmf_br" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:10.396 Cannot find device "nvmf_init_if" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:10.396 Cannot find device "nvmf_init_if2" 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.396 17:18:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.396 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:10.397 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:10.397 17:18:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.655 17:18:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.655 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:10.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:19:10.655 00:19:10.655 --- 10.0.0.3 ping statistics --- 00:19:10.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.655 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:10.655 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:10.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:10.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:10.655 00:19:10.656 --- 10.0.0.4 ping statistics --- 00:19:10.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.656 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:19:10.656 00:19:10.656 --- 10.0.0.1 ping statistics --- 00:19:10.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.656 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:10.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
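The ipts wrapper at nvmf/common.sh@217-@219 tags every firewall rule with an "SPDK_NVMF:" comment (see the @790 expansions above), which is what lets the later iptr cleanup drop exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. A sketch consistent with those expansions, plus the four-way connectivity check performed around this point:

  # hedged reconstruction of the wrapper; its shape matches the @790 trace lines
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # root namespace -> target namespace, then the reverse direction
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2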
00:19:10.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:19:10.656 00:19:10.656 --- 10.0.0.2 ping statistics --- 00:19:10.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.656 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:10.656 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:10.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:11.172 Waiting for block devices as requested 00:19:11.172 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.172 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:11.172 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:11.430 No valid GPT data, bailing 00:19:11.430 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:11.430 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:11.431 17:18:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:11.431 No valid GPT data, bailing 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:11.431 No valid GPT data, bailing 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:11.431 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:11.431 No valid GPT data, bailing 00:19:11.689 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:11.690 17:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:11.690 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.1 -t tcp -s 4420 00:19:11.690 00:19:11.690 Discovery Log Number of Records 2, Generation counter 2 00:19:11.690 =====Discovery Log Entry 0====== 00:19:11.690 trtype: tcp 00:19:11.690 adrfam: ipv4 00:19:11.690 subtype: current discovery subsystem 00:19:11.690 treq: not specified, sq flow control disable supported 00:19:11.690 portid: 1 00:19:11.690 trsvcid: 4420 00:19:11.690 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:11.690 traddr: 10.0.0.1 00:19:11.690 eflags: none 00:19:11.690 sectype: none 00:19:11.690 =====Discovery Log Entry 1====== 00:19:11.690 trtype: tcp 00:19:11.690 adrfam: ipv4 00:19:11.690 subtype: nvme subsystem 00:19:11.690 treq: not 
specified, sq flow control disable supported 00:19:11.690 portid: 1 00:19:11.690 trsvcid: 4420 00:19:11.690 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:11.690 traddr: 10.0.0.1 00:19:11.690 eflags: none 00:19:11.690 sectype: none 00:19:11.690 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:11.690 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:11.690 ===================================================== 00:19:11.690 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:11.690 ===================================================== 00:19:11.690 Controller Capabilities/Features 00:19:11.690 ================================ 00:19:11.690 Vendor ID: 0000 00:19:11.690 Subsystem Vendor ID: 0000 00:19:11.690 Serial Number: 34254f733ccd0a0afcc5 00:19:11.690 Model Number: Linux 00:19:11.690 Firmware Version: 6.8.9-20 00:19:11.690 Recommended Arb Burst: 0 00:19:11.690 IEEE OUI Identifier: 00 00 00 00:19:11.690 Multi-path I/O 00:19:11.690 May have multiple subsystem ports: No 00:19:11.690 May have multiple controllers: No 00:19:11.690 Associated with SR-IOV VF: No 00:19:11.690 Max Data Transfer Size: Unlimited 00:19:11.690 Max Number of Namespaces: 0 00:19:11.690 Max Number of I/O Queues: 1024 00:19:11.690 NVMe Specification Version (VS): 1.3 00:19:11.690 NVMe Specification Version (Identify): 1.3 00:19:11.690 Maximum Queue Entries: 1024 00:19:11.690 Contiguous Queues Required: No 00:19:11.690 Arbitration Mechanisms Supported 00:19:11.690 Weighted Round Robin: Not Supported 00:19:11.690 Vendor Specific: Not Supported 00:19:11.690 Reset Timeout: 7500 ms 00:19:11.690 Doorbell Stride: 4 bytes 00:19:11.690 NVM Subsystem Reset: Not Supported 00:19:11.690 Command Sets Supported 00:19:11.690 NVM Command Set: Supported 00:19:11.690 Boot Partition: Not Supported 00:19:11.690 Memory Page Size Minimum: 4096 bytes 00:19:11.690 Memory Page Size Maximum: 4096 bytes 00:19:11.690 Persistent Memory Region: Not Supported 00:19:11.690 Optional Asynchronous Events Supported 00:19:11.690 Namespace Attribute Notices: Not Supported 00:19:11.690 Firmware Activation Notices: Not Supported 00:19:11.690 ANA Change Notices: Not Supported 00:19:11.690 PLE Aggregate Log Change Notices: Not Supported 00:19:11.690 LBA Status Info Alert Notices: Not Supported 00:19:11.690 EGE Aggregate Log Change Notices: Not Supported 00:19:11.690 Normal NVM Subsystem Shutdown event: Not Supported 00:19:11.690 Zone Descriptor Change Notices: Not Supported 00:19:11.690 Discovery Log Change Notices: Supported 00:19:11.690 Controller Attributes 00:19:11.690 128-bit Host Identifier: Not Supported 00:19:11.690 Non-Operational Permissive Mode: Not Supported 00:19:11.690 NVM Sets: Not Supported 00:19:11.690 Read Recovery Levels: Not Supported 00:19:11.690 Endurance Groups: Not Supported 00:19:11.690 Predictable Latency Mode: Not Supported 00:19:11.690 Traffic Based Keep ALive: Not Supported 00:19:11.690 Namespace Granularity: Not Supported 00:19:11.690 SQ Associations: Not Supported 00:19:11.690 UUID List: Not Supported 00:19:11.690 Multi-Domain Subsystem: Not Supported 00:19:11.690 Fixed Capacity Management: Not Supported 00:19:11.690 Variable Capacity Management: Not Supported 00:19:11.690 Delete Endurance Group: Not Supported 00:19:11.690 Delete NVM Set: Not Supported 00:19:11.690 Extended LBA Formats Supported: Not Supported 00:19:11.690 Flexible Data 
Placement Supported: Not Supported 00:19:11.690 00:19:11.690 Controller Memory Buffer Support 00:19:11.690 ================================ 00:19:11.690 Supported: No 00:19:11.690 00:19:11.690 Persistent Memory Region Support 00:19:11.690 ================================ 00:19:11.690 Supported: No 00:19:11.690 00:19:11.690 Admin Command Set Attributes 00:19:11.690 ============================ 00:19:11.690 Security Send/Receive: Not Supported 00:19:11.690 Format NVM: Not Supported 00:19:11.690 Firmware Activate/Download: Not Supported 00:19:11.690 Namespace Management: Not Supported 00:19:11.690 Device Self-Test: Not Supported 00:19:11.690 Directives: Not Supported 00:19:11.690 NVMe-MI: Not Supported 00:19:11.690 Virtualization Management: Not Supported 00:19:11.690 Doorbell Buffer Config: Not Supported 00:19:11.690 Get LBA Status Capability: Not Supported 00:19:11.690 Command & Feature Lockdown Capability: Not Supported 00:19:11.690 Abort Command Limit: 1 00:19:11.690 Async Event Request Limit: 1 00:19:11.690 Number of Firmware Slots: N/A 00:19:11.690 Firmware Slot 1 Read-Only: N/A 00:19:11.950 Firmware Activation Without Reset: N/A 00:19:11.950 Multiple Update Detection Support: N/A 00:19:11.950 Firmware Update Granularity: No Information Provided 00:19:11.950 Per-Namespace SMART Log: No 00:19:11.950 Asymmetric Namespace Access Log Page: Not Supported 00:19:11.950 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:11.950 Command Effects Log Page: Not Supported 00:19:11.950 Get Log Page Extended Data: Supported 00:19:11.950 Telemetry Log Pages: Not Supported 00:19:11.950 Persistent Event Log Pages: Not Supported 00:19:11.950 Supported Log Pages Log Page: May Support 00:19:11.950 Commands Supported & Effects Log Page: Not Supported 00:19:11.950 Feature Identifiers & Effects Log Page:May Support 00:19:11.950 NVMe-MI Commands & Effects Log Page: May Support 00:19:11.950 Data Area 4 for Telemetry Log: Not Supported 00:19:11.950 Error Log Page Entries Supported: 1 00:19:11.950 Keep Alive: Not Supported 00:19:11.950 00:19:11.950 NVM Command Set Attributes 00:19:11.950 ========================== 00:19:11.950 Submission Queue Entry Size 00:19:11.950 Max: 1 00:19:11.950 Min: 1 00:19:11.950 Completion Queue Entry Size 00:19:11.950 Max: 1 00:19:11.950 Min: 1 00:19:11.950 Number of Namespaces: 0 00:19:11.950 Compare Command: Not Supported 00:19:11.950 Write Uncorrectable Command: Not Supported 00:19:11.950 Dataset Management Command: Not Supported 00:19:11.950 Write Zeroes Command: Not Supported 00:19:11.950 Set Features Save Field: Not Supported 00:19:11.950 Reservations: Not Supported 00:19:11.950 Timestamp: Not Supported 00:19:11.950 Copy: Not Supported 00:19:11.950 Volatile Write Cache: Not Present 00:19:11.950 Atomic Write Unit (Normal): 1 00:19:11.950 Atomic Write Unit (PFail): 1 00:19:11.950 Atomic Compare & Write Unit: 1 00:19:11.950 Fused Compare & Write: Not Supported 00:19:11.950 Scatter-Gather List 00:19:11.950 SGL Command Set: Supported 00:19:11.950 SGL Keyed: Not Supported 00:19:11.950 SGL Bit Bucket Descriptor: Not Supported 00:19:11.950 SGL Metadata Pointer: Not Supported 00:19:11.950 Oversized SGL: Not Supported 00:19:11.950 SGL Metadata Address: Not Supported 00:19:11.950 SGL Offset: Supported 00:19:11.950 Transport SGL Data Block: Not Supported 00:19:11.950 Replay Protected Memory Block: Not Supported 00:19:11.950 00:19:11.950 Firmware Slot Information 00:19:11.950 ========================= 00:19:11.950 Active slot: 0 00:19:11.950 00:19:11.950 00:19:11.950 Error Log 
00:19:11.950 ========= 00:19:11.950 00:19:11.950 Active Namespaces 00:19:11.950 ================= 00:19:11.950 Discovery Log Page 00:19:11.950 ================== 00:19:11.950 Generation Counter: 2 00:19:11.950 Number of Records: 2 00:19:11.950 Record Format: 0 00:19:11.950 00:19:11.950 Discovery Log Entry 0 00:19:11.950 ---------------------- 00:19:11.950 Transport Type: 3 (TCP) 00:19:11.950 Address Family: 1 (IPv4) 00:19:11.950 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:11.950 Entry Flags: 00:19:11.950 Duplicate Returned Information: 0 00:19:11.950 Explicit Persistent Connection Support for Discovery: 0 00:19:11.950 Transport Requirements: 00:19:11.950 Secure Channel: Not Specified 00:19:11.950 Port ID: 1 (0x0001) 00:19:11.950 Controller ID: 65535 (0xffff) 00:19:11.950 Admin Max SQ Size: 32 00:19:11.950 Transport Service Identifier: 4420 00:19:11.950 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:11.950 Transport Address: 10.0.0.1 00:19:11.950 Discovery Log Entry 1 00:19:11.950 ---------------------- 00:19:11.950 Transport Type: 3 (TCP) 00:19:11.950 Address Family: 1 (IPv4) 00:19:11.950 Subsystem Type: 2 (NVM Subsystem) 00:19:11.950 Entry Flags: 00:19:11.950 Duplicate Returned Information: 0 00:19:11.950 Explicit Persistent Connection Support for Discovery: 0 00:19:11.950 Transport Requirements: 00:19:11.950 Secure Channel: Not Specified 00:19:11.950 Port ID: 1 (0x0001) 00:19:11.950 Controller ID: 65535 (0xffff) 00:19:11.950 Admin Max SQ Size: 32 00:19:11.950 Transport Service Identifier: 4420 00:19:11.950 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:11.950 Transport Address: 10.0.0.1 00:19:11.950 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:11.950 get_feature(0x01) failed 00:19:11.950 get_feature(0x02) failed 00:19:11.950 get_feature(0x04) failed 00:19:11.950 ===================================================== 00:19:11.950 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:11.950 ===================================================== 00:19:11.950 Controller Capabilities/Features 00:19:11.950 ================================ 00:19:11.950 Vendor ID: 0000 00:19:11.950 Subsystem Vendor ID: 0000 00:19:11.950 Serial Number: 4794d2e763c706d54d0d 00:19:11.950 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:11.950 Firmware Version: 6.8.9-20 00:19:11.950 Recommended Arb Burst: 6 00:19:11.950 IEEE OUI Identifier: 00 00 00 00:19:11.950 Multi-path I/O 00:19:11.950 May have multiple subsystem ports: Yes 00:19:11.950 May have multiple controllers: Yes 00:19:11.950 Associated with SR-IOV VF: No 00:19:11.950 Max Data Transfer Size: Unlimited 00:19:11.950 Max Number of Namespaces: 1024 00:19:11.950 Max Number of I/O Queues: 128 00:19:11.950 NVMe Specification Version (VS): 1.3 00:19:11.950 NVMe Specification Version (Identify): 1.3 00:19:11.950 Maximum Queue Entries: 1024 00:19:11.950 Contiguous Queues Required: No 00:19:11.950 Arbitration Mechanisms Supported 00:19:11.950 Weighted Round Robin: Not Supported 00:19:11.950 Vendor Specific: Not Supported 00:19:11.950 Reset Timeout: 7500 ms 00:19:11.950 Doorbell Stride: 4 bytes 00:19:11.950 NVM Subsystem Reset: Not Supported 00:19:11.950 Command Sets Supported 00:19:11.950 NVM Command Set: Supported 00:19:11.950 Boot Partition: Not Supported 00:19:11.950 Memory 
Page Size Minimum: 4096 bytes 00:19:11.950 Memory Page Size Maximum: 4096 bytes 00:19:11.950 Persistent Memory Region: Not Supported 00:19:11.950 Optional Asynchronous Events Supported 00:19:11.950 Namespace Attribute Notices: Supported 00:19:11.950 Firmware Activation Notices: Not Supported 00:19:11.950 ANA Change Notices: Supported 00:19:11.950 PLE Aggregate Log Change Notices: Not Supported 00:19:11.950 LBA Status Info Alert Notices: Not Supported 00:19:11.950 EGE Aggregate Log Change Notices: Not Supported 00:19:11.950 Normal NVM Subsystem Shutdown event: Not Supported 00:19:11.950 Zone Descriptor Change Notices: Not Supported 00:19:11.950 Discovery Log Change Notices: Not Supported 00:19:11.950 Controller Attributes 00:19:11.950 128-bit Host Identifier: Supported 00:19:11.950 Non-Operational Permissive Mode: Not Supported 00:19:11.950 NVM Sets: Not Supported 00:19:11.950 Read Recovery Levels: Not Supported 00:19:11.950 Endurance Groups: Not Supported 00:19:11.950 Predictable Latency Mode: Not Supported 00:19:11.950 Traffic Based Keep ALive: Supported 00:19:11.950 Namespace Granularity: Not Supported 00:19:11.950 SQ Associations: Not Supported 00:19:11.950 UUID List: Not Supported 00:19:11.950 Multi-Domain Subsystem: Not Supported 00:19:11.950 Fixed Capacity Management: Not Supported 00:19:11.950 Variable Capacity Management: Not Supported 00:19:11.950 Delete Endurance Group: Not Supported 00:19:11.950 Delete NVM Set: Not Supported 00:19:11.950 Extended LBA Formats Supported: Not Supported 00:19:11.950 Flexible Data Placement Supported: Not Supported 00:19:11.950 00:19:11.950 Controller Memory Buffer Support 00:19:11.950 ================================ 00:19:11.950 Supported: No 00:19:11.950 00:19:11.950 Persistent Memory Region Support 00:19:11.950 ================================ 00:19:11.950 Supported: No 00:19:11.950 00:19:11.950 Admin Command Set Attributes 00:19:11.950 ============================ 00:19:11.950 Security Send/Receive: Not Supported 00:19:11.950 Format NVM: Not Supported 00:19:11.950 Firmware Activate/Download: Not Supported 00:19:11.950 Namespace Management: Not Supported 00:19:11.950 Device Self-Test: Not Supported 00:19:11.950 Directives: Not Supported 00:19:11.950 NVMe-MI: Not Supported 00:19:11.950 Virtualization Management: Not Supported 00:19:11.950 Doorbell Buffer Config: Not Supported 00:19:11.950 Get LBA Status Capability: Not Supported 00:19:11.950 Command & Feature Lockdown Capability: Not Supported 00:19:11.950 Abort Command Limit: 4 00:19:11.950 Async Event Request Limit: 4 00:19:11.951 Number of Firmware Slots: N/A 00:19:11.951 Firmware Slot 1 Read-Only: N/A 00:19:11.951 Firmware Activation Without Reset: N/A 00:19:11.951 Multiple Update Detection Support: N/A 00:19:11.951 Firmware Update Granularity: No Information Provided 00:19:11.951 Per-Namespace SMART Log: Yes 00:19:11.951 Asymmetric Namespace Access Log Page: Supported 00:19:11.951 ANA Transition Time : 10 sec 00:19:11.951 00:19:11.951 Asymmetric Namespace Access Capabilities 00:19:11.951 ANA Optimized State : Supported 00:19:11.951 ANA Non-Optimized State : Supported 00:19:11.951 ANA Inaccessible State : Supported 00:19:11.951 ANA Persistent Loss State : Supported 00:19:11.951 ANA Change State : Supported 00:19:11.951 ANAGRPID is not changed : No 00:19:11.951 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:11.951 00:19:11.951 ANA Group Identifier Maximum : 128 00:19:11.951 Number of ANA Group Identifiers : 128 00:19:11.951 Max Number of Allowed Namespaces : 1024 00:19:11.951 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:11.951 Command Effects Log Page: Supported 00:19:11.951 Get Log Page Extended Data: Supported 00:19:11.951 Telemetry Log Pages: Not Supported 00:19:11.951 Persistent Event Log Pages: Not Supported 00:19:11.951 Supported Log Pages Log Page: May Support 00:19:11.951 Commands Supported & Effects Log Page: Not Supported 00:19:11.951 Feature Identifiers & Effects Log Page:May Support 00:19:11.951 NVMe-MI Commands & Effects Log Page: May Support 00:19:11.951 Data Area 4 for Telemetry Log: Not Supported 00:19:11.951 Error Log Page Entries Supported: 128 00:19:11.951 Keep Alive: Supported 00:19:11.951 Keep Alive Granularity: 1000 ms 00:19:11.951 00:19:11.951 NVM Command Set Attributes 00:19:11.951 ========================== 00:19:11.951 Submission Queue Entry Size 00:19:11.951 Max: 64 00:19:11.951 Min: 64 00:19:11.951 Completion Queue Entry Size 00:19:11.951 Max: 16 00:19:11.951 Min: 16 00:19:11.951 Number of Namespaces: 1024 00:19:11.951 Compare Command: Not Supported 00:19:11.951 Write Uncorrectable Command: Not Supported 00:19:11.951 Dataset Management Command: Supported 00:19:11.951 Write Zeroes Command: Supported 00:19:11.951 Set Features Save Field: Not Supported 00:19:11.951 Reservations: Not Supported 00:19:11.951 Timestamp: Not Supported 00:19:11.951 Copy: Not Supported 00:19:11.951 Volatile Write Cache: Present 00:19:11.951 Atomic Write Unit (Normal): 1 00:19:11.951 Atomic Write Unit (PFail): 1 00:19:11.951 Atomic Compare & Write Unit: 1 00:19:11.951 Fused Compare & Write: Not Supported 00:19:11.951 Scatter-Gather List 00:19:11.951 SGL Command Set: Supported 00:19:11.951 SGL Keyed: Not Supported 00:19:11.951 SGL Bit Bucket Descriptor: Not Supported 00:19:11.951 SGL Metadata Pointer: Not Supported 00:19:11.951 Oversized SGL: Not Supported 00:19:11.951 SGL Metadata Address: Not Supported 00:19:11.951 SGL Offset: Supported 00:19:11.951 Transport SGL Data Block: Not Supported 00:19:11.951 Replay Protected Memory Block: Not Supported 00:19:11.951 00:19:11.951 Firmware Slot Information 00:19:11.951 ========================= 00:19:11.951 Active slot: 0 00:19:11.951 00:19:11.951 Asymmetric Namespace Access 00:19:11.951 =========================== 00:19:11.951 Change Count : 0 00:19:11.951 Number of ANA Group Descriptors : 1 00:19:11.951 ANA Group Descriptor : 0 00:19:11.951 ANA Group ID : 1 00:19:11.951 Number of NSID Values : 1 00:19:11.951 Change Count : 0 00:19:11.951 ANA State : 1 00:19:11.951 Namespace Identifier : 1 00:19:11.951 00:19:11.951 Commands Supported and Effects 00:19:11.951 ============================== 00:19:11.951 Admin Commands 00:19:11.951 -------------- 00:19:11.951 Get Log Page (02h): Supported 00:19:11.951 Identify (06h): Supported 00:19:11.951 Abort (08h): Supported 00:19:11.951 Set Features (09h): Supported 00:19:11.951 Get Features (0Ah): Supported 00:19:11.951 Asynchronous Event Request (0Ch): Supported 00:19:11.951 Keep Alive (18h): Supported 00:19:11.951 I/O Commands 00:19:11.951 ------------ 00:19:11.951 Flush (00h): Supported 00:19:11.951 Write (01h): Supported LBA-Change 00:19:11.951 Read (02h): Supported 00:19:11.951 Write Zeroes (08h): Supported LBA-Change 00:19:11.951 Dataset Management (09h): Supported 00:19:11.951 00:19:11.951 Error Log 00:19:11.951 ========= 00:19:11.951 Entry: 0 00:19:11.951 Error Count: 0x3 00:19:11.951 Submission Queue Id: 0x0 00:19:11.951 Command Id: 0x5 00:19:11.951 Phase Bit: 0 00:19:11.951 Status Code: 0x2 00:19:11.951 Status Code Type: 0x0 00:19:11.951 Do Not Retry: 1 00:19:11.951 Error 
Location: 0x28 00:19:11.951 LBA: 0x0 00:19:11.951 Namespace: 0x0 00:19:11.951 Vendor Log Page: 0x0 00:19:11.951 ----------- 00:19:11.951 Entry: 1 00:19:11.951 Error Count: 0x2 00:19:11.951 Submission Queue Id: 0x0 00:19:11.951 Command Id: 0x5 00:19:11.951 Phase Bit: 0 00:19:11.951 Status Code: 0x2 00:19:11.951 Status Code Type: 0x0 00:19:11.951 Do Not Retry: 1 00:19:11.951 Error Location: 0x28 00:19:11.951 LBA: 0x0 00:19:11.951 Namespace: 0x0 00:19:11.951 Vendor Log Page: 0x0 00:19:11.951 ----------- 00:19:11.951 Entry: 2 00:19:11.951 Error Count: 0x1 00:19:11.951 Submission Queue Id: 0x0 00:19:11.951 Command Id: 0x4 00:19:11.951 Phase Bit: 0 00:19:11.951 Status Code: 0x2 00:19:11.951 Status Code Type: 0x0 00:19:11.951 Do Not Retry: 1 00:19:11.951 Error Location: 0x28 00:19:11.951 LBA: 0x0 00:19:11.951 Namespace: 0x0 00:19:11.951 Vendor Log Page: 0x0 00:19:11.951 00:19:11.951 Number of Queues 00:19:11.951 ================ 00:19:11.951 Number of I/O Submission Queues: 128 00:19:11.951 Number of I/O Completion Queues: 128 00:19:11.951 00:19:11.951 ZNS Specific Controller Data 00:19:11.951 ============================ 00:19:11.951 Zone Append Size Limit: 0 00:19:11.951 00:19:11.951 00:19:11.951 Active Namespaces 00:19:11.951 ================= 00:19:11.951 get_feature(0x05) failed 00:19:11.951 Namespace ID:1 00:19:11.951 Command Set Identifier: NVM (00h) 00:19:11.951 Deallocate: Supported 00:19:11.951 Deallocated/Unwritten Error: Not Supported 00:19:11.951 Deallocated Read Value: Unknown 00:19:11.951 Deallocate in Write Zeroes: Not Supported 00:19:11.951 Deallocated Guard Field: 0xFFFF 00:19:11.951 Flush: Supported 00:19:11.951 Reservation: Not Supported 00:19:11.951 Namespace Sharing Capabilities: Multiple Controllers 00:19:11.951 Size (in LBAs): 1310720 (5GiB) 00:19:11.951 Capacity (in LBAs): 1310720 (5GiB) 00:19:11.951 Utilization (in LBAs): 1310720 (5GiB) 00:19:11.951 UUID: 9435d02d-9cd3-4579-abe3-00920612c79e 00:19:11.951 Thin Provisioning: Not Supported 00:19:11.951 Per-NS Atomic Units: Yes 00:19:11.951 Atomic Boundary Size (Normal): 0 00:19:11.951 Atomic Boundary Size (PFail): 0 00:19:11.951 Atomic Boundary Offset: 0 00:19:11.951 NGUID/EUI64 Never Reused: No 00:19:11.951 ANA group ID: 1 00:19:11.951 Namespace Write Protected: No 00:19:11.951 Number of LBA Formats: 1 00:19:11.951 Current LBA Format: LBA Format #00 00:19:11.951 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:11.951 00:19:11.951 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:11.951 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.951 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.210 rmmod nvme_tcp 00:19:12.210 rmmod nvme_fabrics 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:12.210 17:18:04 
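The nvmftestfini path just traced (nvmf/common.sh@121-@129) has to unload nvme-tcp while the kernel may still be tearing the association down, so it syncs, drops set -e, and gives the unload up to 20 attempts instead of failing on the first busy module. A sketch of that loop; the break-out and back-off details are inferred rather than visible in the trace:

  sync
  set +e
  for i in {1..20}; do
      # -v echoes the underlying rmmod calls, as seen in the log
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1   # assumed pause between attempts
  done
  set -e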
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:12.210 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:12.211 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:12.211 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:12.211 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.211 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:12.470 17:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:13.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.297 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:13.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:13.297 ************************************ 00:19:13.297 END TEST nvmf_identify_kernel_target 00:19:13.297 ************************************ 00:19:13.297 00:19:13.297 real 0m3.268s 00:19:13.297 user 0m1.179s 00:19:13.297 sys 0m1.471s 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.297 ************************************ 00:19:13.297 START TEST nvmf_auth_host 00:19:13.297 ************************************ 00:19:13.297 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:13.556 * Looking for test storage... 
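configure_kernel_target (nvmf/common.sh@660-@705 earlier) and the clean_kernel_target just traced (@712-@723) are mirror images over /sys/kernel/config/nvmet. The xtrace does not show redirection targets, so the attribute names below are the standard nvmet configfs ones (attr_model matches the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output); the paths and values are taken from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  # setup: export /dev/nvme1n1 over TCP at 10.0.0.1:4420
  modprobe nvmet
  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$ns/device_path"
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # teardown: disable the namespace, then remove in reverse order
  echo 0 > "$ns/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$ns" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet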
00:19:13.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.557 --rc genhtml_branch_coverage=1 00:19:13.557 --rc genhtml_function_coverage=1 00:19:13.557 --rc genhtml_legend=1 00:19:13.557 --rc geninfo_all_blocks=1 00:19:13.557 --rc geninfo_unexecuted_blocks=1 00:19:13.557 00:19:13.557 ' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.557 --rc genhtml_branch_coverage=1 00:19:13.557 --rc genhtml_function_coverage=1 00:19:13.557 --rc genhtml_legend=1 00:19:13.557 --rc geninfo_all_blocks=1 00:19:13.557 --rc geninfo_unexecuted_blocks=1 00:19:13.557 00:19:13.557 ' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.557 --rc genhtml_branch_coverage=1 00:19:13.557 --rc genhtml_function_coverage=1 00:19:13.557 --rc genhtml_legend=1 00:19:13.557 --rc geninfo_all_blocks=1 00:19:13.557 --rc geninfo_unexecuted_blocks=1 00:19:13.557 00:19:13.557 ' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.557 --rc genhtml_branch_coverage=1 00:19:13.557 --rc genhtml_function_coverage=1 00:19:13.557 --rc genhtml_legend=1 00:19:13.557 --rc geninfo_all_blocks=1 00:19:13.557 --rc geninfo_unexecuted_blocks=1 00:19:13.557 00:19:13.557 ' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:13.557 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:13.558 17:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:13.558 Cannot find device "nvmf_init_br" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:13.558 Cannot find device "nvmf_init_br2" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:13.558 Cannot find device "nvmf_tgt_br" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:13.558 Cannot find device "nvmf_tgt_br2" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:13.558 Cannot find device "nvmf_init_br" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:13.558 Cannot find device "nvmf_init_br2" 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:13.558 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:13.817 Cannot find device "nvmf_tgt_br" 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:13.817 Cannot find device "nvmf_tgt_br2" 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:13.817 Cannot find device "nvmf_br" 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:13.817 Cannot find device "nvmf_init_if" 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:13.817 Cannot find device "nvmf_init_if2" 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.817 17:18:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:13.817 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.818 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:14.077 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
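[Annotation] The nvmf_veth_init trace above builds the TCP test topology: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of the veth pairs, the initiator ends stay in the root namespace, and a bridge (nvmf_br) joins the peer halves so 10.0.0.1/.2 can reach 10.0.0.3/.4. A minimal sketch of one initiator/target pair, using the names and addresses from the log (the harness wires two pairs per side, opens TCP/4420 with iptables as seen just below, and carries error handling omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the ns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # bridge the two peer halves
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3   # root ns -> namespace, as the log verifies below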
00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:14.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:14.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:14.078 00:19:14.078 --- 10.0.0.3 ping statistics --- 00:19:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.078 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:14.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:14.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:19:14.078 00:19:14.078 --- 10.0.0.4 ping statistics --- 00:19:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.078 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:14.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:14.078 00:19:14.078 --- 10.0.0.1 ping statistics --- 00:19:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.078 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:14.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:14.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:14.078 00:19:14.078 --- 10.0.0.2 ping statistics --- 00:19:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.078 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92491 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92491 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92491 ']' 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
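[Annotation] With connectivity verified, nvmfappstart prepends the namespace wrapper to the app command (NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")), so nvmf_tgt runs inside nvmf_tgt_ns_spdk while its RPC socket, being a filesystem Unix socket, stays reachable at /var/tmp/spdk.sock from the root namespace. Roughly the launch-and-wait dance traced above; the polling loop is a simple stand-in for waitforlisten, which in the harness retries an actual RPC:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # wait for the RPC Unix socket to appear (waitforlisten stand-in)
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "nvmf_tgt up as pid $nvmfpid"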
00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.078 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=331b7ec69c2e84ff8a9a0750c04fd7d1 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tkr 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 331b7ec69c2e84ff8a9a0750c04fd7d1 0 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 331b7ec69c2e84ff8a9a0750c04fd7d1 0 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=331b7ec69c2e84ff8a9a0750c04fd7d1 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:14.646 17:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tkr 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tkr 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Tkr 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.646 17:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0195ddd52413cad176137ad50ef2ed7b8296946728edbd84334ab3d6d090a112 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2m4 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0195ddd52413cad176137ad50ef2ed7b8296946728edbd84334ab3d6d090a112 3 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0195ddd52413cad176137ad50ef2ed7b8296946728edbd84334ab3d6d090a112 3 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0195ddd52413cad176137ad50ef2ed7b8296946728edbd84334ab3d6d090a112 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2m4 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2m4 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2m4 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c9848a0f3410c0ea0e01986364b7843815f6324ae96e7d0 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Un1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c9848a0f3410c0ea0e01986364b7843815f6324ae96e7d0 0 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c9848a0f3410c0ea0e01986364b7843815f6324ae96e7d0 0 
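[Annotation] Each gen_dhchap_key call traced above reads len/2 random bytes as hex via xxd, then hands the hex string to a small inline python to produce the DHHC-1 secret written to a /tmp/spdk.key-* file. A condensed sketch of one "gen_dhchap_key null 48" round; the encoding step is an assumption here, following the standard NVMe DH-HMAC-CHAP secret representation (base64 over the secret bytes plus their little-endian CRC-32) rather than the exact helper in nvmf/common.sh:

  digest=0                              # 0=null, 1=sha256, 2=sha384, 3=sha512
  key=$(xxd -p -c0 -l 24 /dev/urandom)  # 48 hex chars
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" "$digest" > "$file" <<'PY'
  import base64, binascii, struct, sys
  secret = bytes.fromhex(sys.argv[1])
  crc = struct.pack('<I', binascii.crc32(secret))   # CRC-32 of the secret, little-endian
  print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
  PY
  chmod 0600 "$file"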
00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c9848a0f3410c0ea0e01986364b7843815f6324ae96e7d0 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Un1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Un1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Un1 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6bb0e6013e5444175990f7c9007cb4d9960a743293178de7 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9gE 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6bb0e6013e5444175990f7c9007cb4d9960a743293178de7 2 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6bb0e6013e5444175990f7c9007cb4d9960a743293178de7 2 00:19:14.646 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.647 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.647 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6bb0e6013e5444175990f7c9007cb4d9960a743293178de7 00:19:14.647 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:14.647 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9gE 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9gE 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9gE 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.906 17:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f27d64cbad70f22cc1d423e2aef1fbcd 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ccK 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f27d64cbad70f22cc1d423e2aef1fbcd 1 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f27d64cbad70f22cc1d423e2aef1fbcd 1 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f27d64cbad70f22cc1d423e2aef1fbcd 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ccK 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ccK 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ccK 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.906 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=afe2ab04b84f744b94e12c4fcb48e881 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WRg 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key afe2ab04b84f744b94e12c4fcb48e881 1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 afe2ab04b84f744b94e12c4fcb48e881 1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=afe2ab04b84f744b94e12c4fcb48e881 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WRg 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WRg 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WRg 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6eb3eecfa5aa81f35372abcb70c69330a6038c7dc13cc841 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ODc 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6eb3eecfa5aa81f35372abcb70c69330a6038c7dc13cc841 2 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6eb3eecfa5aa81f35372abcb70c69330a6038c7dc13cc841 2 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6eb3eecfa5aa81f35372abcb70c69330a6038c7dc13cc841 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ODc 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ODc 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ODc 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:14.907 17:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aaa2820a384db8d2554d550f3d85dc24 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zhA 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aaa2820a384db8d2554d550f3d85dc24 0 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aaa2820a384db8d2554d550f3d85dc24 0 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aaa2820a384db8d2554d550f3d85dc24 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:14.907 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zhA 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zhA 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zhA 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49503c6d8d9bc77492f1d7b2ff689d1b4bc4f318609bcf04e0e8fbbff1b75c0d 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vl5 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49503c6d8d9bc77492f1d7b2ff689d1b4bc4f318609bcf04e0e8fbbff1b75c0d 3 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49503c6d8d9bc77492f1d7b2ff689d1b4bc4f318609bcf04e0e8fbbff1b75c0d 3 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49503c6d8d9bc77492f1d7b2ff689d1b4bc4f318609bcf04e0e8fbbff1b75c0d 00:19:15.165 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vl5 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vl5 00:19:15.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Vl5 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92491 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92491 ']' 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.166 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tkr 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2m4 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2m4 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Un1 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9gE ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9gE 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ccK 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WRg ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WRg 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ODc 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zhA ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zhA 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Vl5 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.425 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.684 17:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:15.684 17:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:15.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:15.942 Waiting for block devices as requested 00:19:15.942 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:16.201 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:16.769 No valid GPT data, bailing 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:16.769 No valid GPT data, bailing 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:16.769 No valid GPT data, bailing 00:19:16.769 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:17.028 No valid GPT data, bailing 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.1 -t tcp -s 4420 00:19:17.028 00:19:17.028 Discovery Log Number of Records 2, Generation counter 2 00:19:17.028 =====Discovery Log Entry 0====== 00:19:17.028 trtype: tcp 00:19:17.028 adrfam: ipv4 00:19:17.028 subtype: current discovery subsystem 00:19:17.028 treq: not specified, sq flow control disable supported 00:19:17.028 portid: 1 00:19:17.028 trsvcid: 4420 00:19:17.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:17.028 traddr: 10.0.0.1 00:19:17.028 eflags: none 00:19:17.028 sectype: none 00:19:17.028 =====Discovery Log Entry 1====== 00:19:17.028 trtype: tcp 00:19:17.028 adrfam: ipv4 00:19:17.028 subtype: nvme subsystem 00:19:17.028 treq: not specified, sq flow control disable supported 00:19:17.028 portid: 1 00:19:17.028 trsvcid: 4420 00:19:17.028 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:17.028 traddr: 10.0.0.1 00:19:17.028 eflags: none 00:19:17.028 sectype: none 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.028 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.029 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.287 nvme0n1 00:19:17.287 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.288 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 nvme0n1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.555 
17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.555 17:18:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.555 17:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 nvme0n1 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:17.841 17:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.841 nvme0n1 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:17.841 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.842 17:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.842 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 nvme0n1 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.156 
17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
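The nvmet_auth_set_key passes traced above (host/auth.sh@42-51) stage the DH-HMAC-CHAP material on the kernel target before each connection attempt. xtrace does not show redirection targets, so the following is only a hedged sketch of the equivalent configfs writes, assuming the upstream nvmet host attributes dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key; $key and $ckey stand for the DHHC-1:... secrets echoed in the trace:

    # Stage DH-HMAC-CHAP parameters for one (digest, dhgroup, keyid) pass
    # on the kernel nvmet target (attribute names are an assumption).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test (host/auth.sh@48)
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"  # FFDHE group under test (@49)
    echo "$key"         > "$host/dhchap_key"      # host key (@50)
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional auth only (@51)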
00:19:18.156 nvme0n1 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:18.414 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:18.415 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:18.415 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.415 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:18.673 17:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.673 17:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.673 nvme0n1 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.673 17:18:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.673 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.933 17:18:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.933 nvme0n1 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.933 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.192 nvme0n1 00:19:19.192 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.192 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.192 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.192 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.192 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.193 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.452 nvme0n1 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.452 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 nvme0n1 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.453 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.710 17:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.710 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:20.276 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:20.276 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:20.276 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:20.276 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.277 17:18:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.277 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.535 nvme0n1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.535 17:18:12 
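The cycle traced above repeats for every digest/dhgroup/keyid combination. Condensed into one function, the host-side flow is roughly the following (a sketch reconstructed from the xtrace; rpc_cmd, the keys/ckeys arrays, and the NQNs are assumed to come from the surrounding SPDK test harness):

    connect_authenticate() {   # host/auth.sh@104, as replayed in the trace
        local digest=$1 dhgroup=$2 keyid=$3
        # @58: expands to nothing when no controller key exists for this keyid
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # @60: restrict the initiator to the combination under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @61: attach over TCP to the authenticated subsystem
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # @64-65: authentication succeeded iff the controller shows up; then tear down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The trace prints the success check as [[ nvme0 == \n\v\m\e\0 ]] because xtrace backslash-escapes the pattern side of == inside [[ ]]; it is a plain literal comparison.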
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.535 17:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.794 nvme0n1 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.794 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.053 nvme0n1 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.053 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.312 nvme0n1 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.312 17:18:13 
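Each attach is preceded by get_main_ns_ip (nvmf/common.sh@769-783 in the trace), which resolves the initiator-side address for the active transport via indirect expansion. A condensed sketch; the transport variable name is an assumption, since the xtrace only shows its expanded value, tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # @772
            [tcp]=NVMF_INITIATOR_IP       # @773
        )
        # $TEST_TRANSPORT is an assumed name; the trace shows its value "tcp"
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @776
        [[ -z ${!ip} ]] && return 1            # @778: resolves to 10.0.0.1 in this run
        echo "${!ip}"                          # @783
    }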
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.312 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.571 nvme0n1 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.571 17:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.474 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 nvme0n1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 17:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.042 nvme0n1 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.042 17:18:16 
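On the target side, nvmet_auth_set_key (host/auth.sh@42-51) programs the same key into the kernel nvmet host entry before each connect attempt. The xtrace records only the echo commands, not their redirections, so the destinations below are a plausible reconstruction, assuming the standard nvmet configfs attributes and a $hostnqn variable naming the host entry:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local h=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path
        echo "hmac($digest)" > "$h/dhchap_hash"           # @48
        echo "$dhgroup"      > "$h/dhchap_dhgroup"        # @49
        echo "$key"          > "$h/dhchap_key"            # @50
        # @51: only set a controller key when one is defined (bidirectional auth)
        [[ -z $ckey ]] || echo "$ckey" > "$h/dhchap_ctrl_key"
    }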
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:24.042 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.043 17:18:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.043 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.302 nvme0n1 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:24.302 17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.302 
17:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.561 nvme0n1 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.561 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.819 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.820 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 nvme0n1 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.079 17:18:17 
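The ffdhe6144/keyid-4 pass above runs with ckey= empty, so the attach carries --dhchap-key key4 and no --dhchap-ctrlr-key: authentication is one-way for that key. The mechanics are plain bash array expansion, demonstrable in isolation:

    # ${ckeys[keyid]:+...} yields nothing for an empty entry, so the array
    # stays empty and no --dhchap-ctrlr-key flag reaches the attach RPC.
    declare -a ckeys; ckeys[4]=""
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0: the flag is omitted for keyid 4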
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 17:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.647 nvme0n1 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.647 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.905 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.906 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 nvme0n1 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.472 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.473 
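
The DHHC-1 strings cycled through the trace are DH-HMAC-CHAP secrets in the standard NVMe textual form DHHC-1:xx:<base64>:, where the middle field records the hash the secret was transformed with (00 = untransformed, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. One way to sanity-check a key (this one is key id 2's host key from the trace; it prints 36, i.e. a 32-byte secret plus the CRC):

    # Decode a DHHC-1 secret and report the payload length in bytes.
    key='DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU:'
    b64=${key#DHHC-1:*:}              # strip the "DHHC-1:xx:" prefix
    b64=${b64%:}                      # strip the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c   # secret bytes + 4-byte CRC-32
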
17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.473 17:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.040 nvme0n1 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.040 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.299 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.300 17:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.867 nvme0n1 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.867 17:18:20 
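
Between attach and detach, every iteration asserts that exactly the expected controller is present. The escaped comparison that keeps appearing, [[ nvme0 == \n\v\m\e\0 ]], is bash forcing a literal (non-glob) match inside [[ ]]; an unquoted right-hand side would otherwise be interpreted as a pattern. The check amounts to:

    # Require a literal match on the controller name reported over RPC.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # quoting the RHS disables globbing, same effect
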
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.867 17:18:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.867 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.434 nvme0n1 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:28.434 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
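
At 00:19:28.434 the outer loops advance: the host/auth.sh@100-@102 tags (for digest / for dhgroup / for keyid) show the suite is a full cross-product, so the same five-key cycle now restarts under sha384 with ffdhe2048 and will repeat for the remaining FFDHE groups. The driving structure implied by those tags:

    # Cross-product driver implied by the @100-@104 trace tags.
    for digest in "${digests[@]}"; do          # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # key ids 0..4
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
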
common/autotest_common.sh@10 -- # set +x 00:19:28.435 nvme0n1 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.435 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.694 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.694 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.694 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.694 17:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.694 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.695 nvme0n1 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:28.695 
17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.695 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.954 nvme0n1 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.954 
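
nvmet_auth_set_key itself runs against the target half of the loopback. The echo 'hmac(sha384)' / echo ffdhe2048 / echo DHHC-1:... triplet at host/auth.sh@48-@51 is consistent with writing the kernel nvmet host entry's DH-HMAC-CHAP attributes under configfs; a plausible shape, with the configfs paths being an assumption rather than something shown in the trace:

    # Assumed target-side programming of the nvmet configfs host entry.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"       # DHHC-1:... host secret
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller secret, if any
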
17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.954 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 nvme0n1 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 nvme0n1 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
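
Key id 4 is the one slot with no controller key: the trace shows ckey= (empty) and a passing [[ -z '' ]], so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 drops --dhchap-ctrlr-key entirely and the attach runs with --dhchap-key key4 alone, i.e. unidirectional authentication (the host proves itself but does not challenge the controller). The idiom, with the surrounding attach shown for context (the address/NQN variables are illustrative):

    # Include the controller-key flag only when a ckey exists for this key id.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"
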
host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.214 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.474 nvme0n1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.474 
17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.474 17:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.474 17:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.734 nvme0n1 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:29.734 17:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.734 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.993 nvme0n1 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:29.993 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.994 17:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.994 nvme0n1 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.994 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:30.266 
17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:30.266 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
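For orientation, every cycle traced in this section has the same shape. The following is a condensed sketch of one iteration, reassembled from the rpc_cmd invocations visible in the trace above; it is not the verbatim host/auth.sh source, and the loop/helper structure is inferred from the traced line numbers:

  # one sha384/ffdhe3072 iteration (keyid 1), as traced above
  nvmet_auth_set_key sha384 ffdhe3072 1                 # program the matching key on the target side
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1        # host authenticates with key1/ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next keyid

The iterations below differ only in the key material: keyids 0-4 are cycled for each DH group (ffdhe3072, then ffdhe4096 and ffdhe6144), and when no controller key is configured for a keyid (ckey empty, as with keyid 4), the ${ckeys[keyid]:+...} expansion seen in the trace drops the --dhchap-ctrlr-key argument entirely.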
00:19:30.267 nvme0n1 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:30.267 17:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.267 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.550 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.551 nvme0n1 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.551 17:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.551 17:18:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.551 17:18:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.551 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.810 nvme0n1 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.810 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.811 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.811 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.069 nvme0n1 00:19:31.069 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.069 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.069 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.069 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.069 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.070 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.329 nvme0n1 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.329 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.589 nvme0n1 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.589 17:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.589 17:18:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.589 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.849 nvme0n1 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.849 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.108 17:18:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.368 nvme0n1 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.368 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.369 17:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.627 nvme0n1 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.627 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.887 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.146 nvme0n1 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:33.146 17:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.146 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.405 nvme0n1 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.405 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.665 17:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.233 nvme0n1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.233 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.234 17:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.801 nvme0n1 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.801 17:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.801 17:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.801 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.369 nvme0n1 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.369 17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.369 
17:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.937 nvme0n1 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:35.937 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.938 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.505 nvme0n1 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.505 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:36.506 17:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.506 17:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 nvme0n1 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:36.506 17:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.506 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.507 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.507 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.507 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.507 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 nvme0n1 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 nvme0n1 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.025 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.026 nvme0n1 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.026 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.285 nvme0n1 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.285 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.286 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.545 nvme0n1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.546 17:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 nvme0n1 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.546 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:37.806 
17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 nvme0n1 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.806 
17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.806 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.066 nvme0n1 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.066 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.326 nvme0n1 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.326 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.585 nvme0n1 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.585 
17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.585 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.586 17:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.586 17:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.845 nvme0n1 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:38.845 17:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.845 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 nvme0n1 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 17:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.363 nvme0n1 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.363 
17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.363 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
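The xtrace entries above repeat one fixed pattern per (dhgroup, keyid) pair: program the kernel nvmet target through nvmet_auth_set_key, pin the SPDK host to a single digest/dhgroup via bdev_nvme_set_options, attach with the matching DH-HMAC-CHAP key, confirm the controller materialized, and detach. A minimal shell sketch of that round follows; it assumes the suite's rpc_cmd and nvmet_auth_set_key helpers are sourced and that the key names key0/ckey0 etc. were registered earlier in auth.sh, with the loop bounds inferred from the for-lines visible in this trace (only keyid 0 secrets are filled in, copied from the trace):

    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)  # groups seen in this excerpt
    keys=('DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg:')  # key0 from the trace
    ckeys=('DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=:')

    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: install the key (and bidirectional ckey, when one exists).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Host side: allow exactly one digest/dhgroup pair, then authenticate.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Pass/fail signal: a failed DH-HMAC-CHAP exchange leaves no controller behind.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The [[ nvme0 == \n\v\m\e\0 ]] comparisons in the trace are this same check after xtrace escaping, and the detach between iterations is what lets every (dhgroup, keyid) combination start from an empty controller list.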
00:19:39.622 nvme0n1 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.622 17:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.622 17:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.622 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 nvme0n1 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.882 17:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.142 17:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.142 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 nvme0n1 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.401 17:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.660 nvme0n1 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:40.660 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.661 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.919 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.178 nvme0n1 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.178 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.179 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.438 nvme0n1 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxYjdlYzY5YzJlODRmZjhhOWEwNzUwYzA0ZmQ3ZDHYVLeg: 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE5NWRkZDUyNDEzY2FkMTc2MTM3YWQ1MGVmMmVkN2I4Mjk2OTQ2NzI4ZWRiZDg0MzM0YWIzZDZkMDkwYTExMhL62kk=: 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.438 17:18:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.438 17:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.007 nvme0n1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.007 17:18:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.007 17:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.574 nvme0n1 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.575 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.833 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.834 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.834 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.401 nvme0n1 00:19:43.401 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmViM2VlY2ZhNWFhODFmMzUzNzJhYmNiNzBjNjkzMzBhNjAzOGM3ZGMxM2NjODQxh9JZOA==: 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWFhMjgyMGEzODRkYjhkMjU1NGQ1NTBmM2Q4NWRjMjT1ZFU1: 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.402 17:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.971 nvme0n1 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDk1MDNjNmQ4ZDliYzc3NDkyZjFkN2IyZmY2ODlkMWI0YmM0ZjMxODYwOWJjZjA0ZTBlOGZiYmZmMWI3NWMwZEzWhTc=: 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:43.971 17:18:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.971 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.539 nvme0n1 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.539 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 2024/11/19 17:18:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.540 request: 00:19:44.540 { 00:19:44.540 "method": "bdev_nvme_attach_controller", 00:19:44.540 "params": { 00:19:44.540 "name": "nvme0", 00:19:44.540 "trtype": "tcp", 00:19:44.540 "traddr": "10.0.0.1", 00:19:44.540 "adrfam": "ipv4", 00:19:44.540 "trsvcid": "4420", 00:19:44.540 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.540 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.540 "prchk_reftag": false, 00:19:44.540 "prchk_guard": false, 00:19:44.540 "hdgst": false, 00:19:44.540 "ddgst": false, 00:19:44.540 "allow_unrecognized_csi": false 00:19:44.540 } 00:19:44.540 } 00:19:44.540 Got JSON-RPC error response 00:19:44.540 GoRPCClient: error on JSON-RPC call 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
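
The failure just recorded is intentional: the NOT helper wraps rpc_cmd and asserts a non-zero exit, since a host that presents no DH-HMAC-CHAP key must be refused (the authentication failure surfaces to the RPC as Code=-5, Input/output error, on the fabrics connect). A rough, hedged equivalent of the assertion:

  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "attach without a key unexpectedly succeeded" >&2
    exit 1
  fi
  # and no stale controller may be left behind
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
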
get_main_ns_ip 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 2024/11/19 17:18:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.540 request: 00:19:44.540 { 00:19:44.540 "method": "bdev_nvme_attach_controller", 00:19:44.540 "params": { 00:19:44.540 "name": "nvme0", 00:19:44.540 "trtype": "tcp", 00:19:44.540 "traddr": "10.0.0.1", 00:19:44.540 "adrfam": "ipv4", 00:19:44.540 "trsvcid": "4420", 00:19:44.540 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.540 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.540 "prchk_reftag": false, 00:19:44.540 "prchk_guard": false, 
00:19:44.540 "hdgst": false, 00:19:44.540 "ddgst": false, 00:19:44.540 "dhchap_key": "key2", 00:19:44.540 "allow_unrecognized_csi": false 00:19:44.540 } 00:19:44.540 } 00:19:44.540 Got JSON-RPC error response 00:19:44.540 GoRPCClient: error on JSON-RPC call 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.540 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.802 2024/11/19 17:18:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.802 request: 00:19:44.802 { 00:19:44.802 "method": "bdev_nvme_attach_controller", 00:19:44.802 "params": { 00:19:44.802 "name": "nvme0", 00:19:44.802 "trtype": "tcp", 00:19:44.802 "traddr": "10.0.0.1", 00:19:44.802 "adrfam": "ipv4", 00:19:44.802 "trsvcid": "4420", 00:19:44.802 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.802 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.802 "prchk_reftag": false, 00:19:44.802 "prchk_guard": false, 00:19:44.802 "hdgst": false, 00:19:44.802 "ddgst": false, 00:19:44.802 "dhchap_key": "key1", 00:19:44.802 "dhchap_ctrlr_key": "ckey2", 00:19:44.802 "allow_unrecognized_csi": false 00:19:44.802 } 00:19:44.802 } 00:19:44.802 Got JSON-RPC error response 00:19:44.802 GoRPCClient: error on JSON-RPC call 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:44.802 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
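
All three rejected attach attempts in this stretch (no key, key2 alone, key1 paired with ckey2) fail the same way because the target was programmed, via nvmet_auth_set_key earlier in the trace, to expect keyid 1 with bidirectional authentication. On the kernel side that helper amounts to a handful of configfs writes; a sketch assuming the standard nvmet host attribute names (the trace shows only the echoed values, not their destinations):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # negotiated digest
  echo ffdhe2048 > "$host/dhchap_dhgroup"          # FFDHE group
  echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host key for keyid 1 (value elided)
  echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller key, bidirectional auth (elided)
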
10.0.0.1 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.803 nvme0n1 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.803 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 2024/11/19 17:18:37 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:19:45.073 request: 00:19:45.073 { 00:19:45.073 "method": "bdev_nvme_set_keys", 00:19:45.073 "params": { 00:19:45.073 "name": "nvme0", 00:19:45.073 "dhchap_key": "key1", 00:19:45.073 "dhchap_ctrlr_key": "ckey2" 00:19:45.073 } 00:19:45.073 } 00:19:45.073 Got JSON-RPC error response 00:19:45.073 GoRPCClient: error on JSON-RPC call 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:45.073 17:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:46.023 17:18:38 
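
This negative set_keys case completes the re-key matrix: after the target rotated to keyid 2, re-keying the live controller with the matching pair succeeded, while the mismatched pair (key1 with ckey2) was refused with Code=-13 (Permission denied). Reading the trace, the one-second wait above polls for the controller to disappear after the failed re-authentication, kept short by the --ctrlr-loss-timeout-sec 1 used at attach time; roughly:

  # rotate keys on a live controller (the target has already moved to keyid 2)
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # a mismatched pair is refused and the controller is torn down; poll until it is gone
  while [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -ne 0 ]]; do
    sleep 1
  done
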
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5ODQ4YTBmMzQxMGMwZWEwZTAxOTg2MzY0Yjc4NDM4MTVmNjMyNGFlOTZlN2Qw7BVQMg==: 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: ]] 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJiMGU2MDEzZTU0NDQxNzU5OTBmN2M5MDA3Y2I0ZDk5NjBhNzQzMjkzMTc4ZGU3MLMmaw==: 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.023 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.282 nvme0n1 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjI3ZDY0Y2JhZDcwZjIyY2MxZDQyM2UyYWVmMWZiY2RyuSRU: 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: ]] 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWZlMmFiMDRiODRmNzQ0Yjk0ZTEyYzRmY2I0OGU4ODHx7ead: 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.282 2024/11/19 17:18:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:19:46.282 request: 00:19:46.282 { 00:19:46.282 "method": "bdev_nvme_set_keys", 00:19:46.282 "params": { 00:19:46.282 "name": "nvme0", 00:19:46.282 "dhchap_key": "key2", 00:19:46.282 "dhchap_ctrlr_key": "ckey1" 00:19:46.282 } 00:19:46.282 } 00:19:46.282 Got JSON-RPC error response 00:19:46.282 GoRPCClient: error on JSON-RPC call 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.282 17:18:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:46.282 17:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.218 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.218 rmmod nvme_tcp 00:19:47.477 rmmod nvme_fabrics 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92491 ']' 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92491 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92491 ']' 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92491 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92491 00:19:47.477 killing process with pid 92491 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
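
With the auth matrix done, cleanup begins. Condensed from the trace around this point, the host side is unwound first (nvmfpid stands in for whatever harness variable holds the target's pid, 92491 in this run):

  sync
  modprobe -v -r nvme-tcp      # the rmmod nvme_tcp / rmmod nvme_fabrics lines are its verbose output
  modprobe -v -r nvme-fabrics  # no-op if the first removal already pulled it out
  kill "$nvmfpid" && wait "$nvmfpid"
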
= sudo ']' 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92491' 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92491 00:19:47.477 17:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92491 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.737 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
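
Network teardown relies on every firewall rule the suite added being tagged with an SPDK_NVMF comment, so a filtered save/restore removes exactly those rules before the veth topology is deleted; condensed from the trace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
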
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:47.996 17:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:48.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.823 17:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Tkr /tmp/spdk.key-null.Un1 /tmp/spdk.key-sha256.ccK /tmp/spdk.key-sha384.ODc /tmp/spdk.key-sha512.Vl5 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:48.823 17:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:49.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.342 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:49.342 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:49.342 00:19:49.342 real 0m35.910s 00:19:49.342 user 0m32.916s 00:19:49.342 sys 0m4.011s 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.342 ************************************ 00:19:49.342 END TEST nvmf_auth_host 00:19:49.342 ************************************ 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.342 ************************************ 00:19:49.342 START TEST nvmf_digest 00:19:49.342 
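
clean_kernel_target, traced above, has to unwind configfs in dependency order, since configfs refuses to remove a directory that is still linked or populated. A sketch of the traced sequence (the destination of the bare 'echo 0' is inferred to be the namespace's enable attribute; the trace does not show it):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  echo 0 > "$subsys/namespaces/1/enable"   # inferred target of the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet
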
************************************ 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:49.342 * Looking for test storage... 00:19:49.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:19:49.342 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.602 --rc genhtml_branch_coverage=1 00:19:49.602 --rc genhtml_function_coverage=1 00:19:49.602 --rc genhtml_legend=1 00:19:49.602 --rc geninfo_all_blocks=1 00:19:49.602 --rc geninfo_unexecuted_blocks=1 00:19:49.602 00:19:49.602 ' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.602 --rc genhtml_branch_coverage=1 00:19:49.602 --rc genhtml_function_coverage=1 00:19:49.602 --rc genhtml_legend=1 00:19:49.602 --rc geninfo_all_blocks=1 00:19:49.602 --rc geninfo_unexecuted_blocks=1 00:19:49.602 00:19:49.602 ' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.602 --rc genhtml_branch_coverage=1 00:19:49.602 --rc genhtml_function_coverage=1 00:19:49.602 --rc genhtml_legend=1 00:19:49.602 --rc geninfo_all_blocks=1 00:19:49.602 --rc geninfo_unexecuted_blocks=1 00:19:49.602 00:19:49.602 ' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.602 --rc genhtml_branch_coverage=1 00:19:49.602 --rc genhtml_function_coverage=1 00:19:49.602 --rc genhtml_legend=1 00:19:49.602 --rc geninfo_all_blocks=1 00:19:49.602 --rc geninfo_unexecuted_blocks=1 00:19:49.602 00:19:49.602 ' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.602 17:18:41 
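
The long scripts/common.sh trace above is just a field-wise version comparison: the common test code picks the pre-2.0 spelling of the lcov coverage flags when the installed lcov is older than 2 (1.15 here, so the comparison returns true). A self-contained sketch of the same comparison, simplified from the traced helper:

  lt() {   # returns 0 when version $1 is strictly older than $2
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"
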
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.602 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:49.603 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
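
One benign hiccup worth noting: the "integer expression expected" message above comes from nvmf/common.sh comparing an empty variable with an arithmetic operator, since test's -eq requires integers on both sides. Reproduced and guarded as a sketch (flag stands in for whichever variable was unset; the trace does not name it):

  [ '' -eq 1 ]             # reproduces the message: an empty string is not an integer
  [ "${flag:-0}" -eq 1 ]   # defaulting the expansion keeps the test quiet
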
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:49.603 Cannot find device "nvmf_init_br" 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:49.603 Cannot find device "nvmf_init_br2" 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:49.603 Cannot find device "nvmf_tgt_br" 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:19:49.603 Cannot find device "nvmf_tgt_br2" 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:19:49.603 17:18:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:49.603 Cannot find device "nvmf_init_br" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:49.603 Cannot find device "nvmf_init_br2" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:49.603 Cannot find device "nvmf_tgt_br" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:49.603 Cannot find device "nvmf_tgt_br2" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:49.603 Cannot find device "nvmf_br" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:49.603 Cannot find device "nvmf_init_if" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:49.603 Cannot find device "nvmf_init_if2" 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:19:49.603 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.604 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.604 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:49.604 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.604 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.604 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.863 17:18:42 
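
The interface setup traced around this point builds a small bridged topology: the initiator keeps two veth endpoints in the root namespace (10.0.0.1 and 10.0.0.2) while the target's endpoints (10.0.0.3 and 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Abridged to one interface pair per side as a sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # bring everything up, then verify with the pings that follow in the trace
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
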
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:49.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:49.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:19:49.863 00:19:49.863 --- 10.0.0.3 ping statistics --- 00:19:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.863 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:49.863 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:49.863 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:19:49.863 00:19:49.863 --- 10.0.0.4 ping statistics --- 00:19:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.863 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:19:49.863 00:19:49.863 --- 10.0.0.1 ping statistics --- 00:19:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.863 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:49.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:49.863 00:19:49.863 --- 10.0.0.2 ping statistics --- 00:19:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.863 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.863 ************************************ 00:19:49.863 START TEST nvmf_digest_clean 00:19:49.863 ************************************ 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
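The nvmf_veth_init sequence above builds a small L2 topology: two initiator-side veth pairs stay in the root namespace (nvmf_init_if/nvmf_init_if2 at 10.0.0.1-2), the target-side pairs are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3-4), the four peer ends are enslaved to the nvmf_br bridge, TCP port 4420 is opened with iptables, and the pings verify both directions. A minimal one-pair sketch of the same setup, using only commands and names that appear in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.3                                             # initiator -> target check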
00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94137 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94137 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94137 ']' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.863 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.122 [2024-11-19 17:18:42.433487] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:19:50.122 [2024-11-19 17:18:42.433606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.122 [2024-11-19 17:18:42.590224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.122 [2024-11-19 17:18:42.647528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.122 [2024-11-19 17:18:42.647610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.122 [2024-11-19 17:18:42.647624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.122 [2024-11-19 17:18:42.647634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.122 [2024-11-19 17:18:42.647644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
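Because NVMF_APP was prefixed with the NVMF_TARGET_NS_CMD array (common.sh@227 above), nvmfappstart launches the target inside the namespace, so it listens on the namespace-side addresses (10.0.0.3/4). A condensed sketch of the launch-and-wait step, assuming nothing beyond what the trace shows:

    # --wait-for-rpc keeps subsystem init on hold until framework_start_init
    # is sent over the default RPC socket /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest_common.sh helper: polls the RPC socket until the app answers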
00:19:50.122 [2024-11-19 17:18:42.648184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.382 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.382 null0 00:19:50.382 [2024-11-19 17:18:42.899268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.643 [2024-11-19 17:18:42.923487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94169 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94169 /var/tmp/bperf.sock 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94169 ']' 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.643 17:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.643 [2024-11-19 17:18:42.995482] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:19:50.643 [2024-11-19 17:18:42.995584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94169 ] 00:19:50.643 [2024-11-19 17:18:43.149590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.902 [2024-11-19 17:18:43.216834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.902 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.902 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:50.902 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:50.902 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:50.902 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:51.161 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:51.161 17:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:51.729 nvme0n1 00:19:51.729 17:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:51.729 17:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:51.729 Running I/O for 2 seconds... 
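Every run_bperf iteration drives bdevperf entirely over RPC: the tool is started idle (-z) and paused (--wait-for-rpc) on its own socket, then initialized, connected with data digest enabled, and kicked off. Condensed from the trace of this first randread 4096/qd128 pass:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init            # digest.sh@87
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # --ddgst = NVMe/TCP data digest
    bdevperf.py -s /var/tmp/bperf.sock perform_tests              # digest.sh@92, 2-second workload

The data digest is a CRC32C over each data PDU, which is why the crc32c accel statistics are checked once the run completes.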
00:19:54.046 21368.00 IOPS, 83.47 MiB/s [2024-11-19T17:18:46.579Z] 21453.00 IOPS, 83.80 MiB/s 00:19:54.046 Latency(us) 00:19:54.046 [2024-11-19T17:18:46.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.046 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:54.046 nvme0n1 : 2.01 21463.27 83.84 0.00 0.00 5956.46 2964.01 11617.75 00:19:54.046 [2024-11-19T17:18:46.579Z] =================================================================================================================== 00:19:54.046 [2024-11-19T17:18:46.579Z] Total : 21463.27 83.84 0.00 0.00 5956.46 2964.01 11617.75 00:19:54.046 { 00:19:54.046 "results": [ 00:19:54.046 { 00:19:54.046 "job": "nvme0n1", 00:19:54.046 "core_mask": "0x2", 00:19:54.046 "workload": "randread", 00:19:54.046 "status": "finished", 00:19:54.046 "queue_depth": 128, 00:19:54.046 "io_size": 4096, 00:19:54.046 "runtime": 2.005007, 00:19:54.046 "iops": 21463.26671178704, 00:19:54.046 "mibps": 83.84088559291813, 00:19:54.046 "io_failed": 0, 00:19:54.046 "io_timeout": 0, 00:19:54.046 "avg_latency_us": 5956.462464605153, 00:19:54.046 "min_latency_us": 2964.0145454545454, 00:19:54.046 "max_latency_us": 11617.745454545455 00:19:54.046 } 00:19:54.046 ], 00:19:54.046 "core_count": 1 00:19:54.046 } 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:54.046 | select(.opcode=="crc32c") 00:19:54.046 | "\(.module_name) \(.executed)"' 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94169 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94169 ']' 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94169 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94169 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
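get_accel_stats (digest.sh@36-37 above) asks bdevperf for its accel statistics and reduces them with the jq filter to a "module executed" pair for the crc32c opcode. Illustratively, with the JSON shape inferred from the filter and a made-up count:

    $ rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    software 42926

The read -r acc_module acc_executed at @93 consumes that pair; the test then requires acc_executed > 0 and acc_module to match the expected module, software here since scan_dsa=false.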
00:19:54.046 killing process with pid 94169 00:19:54.046 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94169' 00:19:54.046 Received shutdown signal, test time was about 2.000000 seconds 00:19:54.046 00:19:54.046 Latency(us) 00:19:54.046 [2024-11-19T17:18:46.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.046 [2024-11-19T17:18:46.579Z] =================================================================================================================== 00:19:54.046 [2024-11-19T17:18:46.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.047 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94169 00:19:54.047 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94169 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94246 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94246 /var/tmp/bperf.sock 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94246 ']' 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:54.305 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:54.306 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.306 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:54.306 Zero copy mechanism will not be used. 00:19:54.306 [2024-11-19 17:18:46.741255] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:19:54.306 [2024-11-19 17:18:46.741347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94246 ] 00:19:54.565 [2024-11-19 17:18:46.883654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.565 [2024-11-19 17:18:46.941706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.565 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.565 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:54.565 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:54.565 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:54.565 17:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:55.133 17:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:55.133 17:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:55.391 nvme0n1 00:19:55.391 17:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:55.391 17:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:55.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:55.391 Zero copy mechanism will not be used. 00:19:55.391 Running I/O for 2 seconds... 
00:19:57.339 7481.00 IOPS, 935.12 MiB/s [2024-11-19T17:18:49.872Z] 7480.00 IOPS, 935.00 MiB/s 00:19:57.339 Latency(us) 00:19:57.339 [2024-11-19T17:18:49.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:57.339 nvme0n1 : 2.00 7477.89 934.74 0.00 0.00 2135.83 558.55 10128.29 00:19:57.339 [2024-11-19T17:18:49.872Z] =================================================================================================================== 00:19:57.339 [2024-11-19T17:18:49.872Z] Total : 7477.89 934.74 0.00 0.00 2135.83 558.55 10128.29 00:19:57.339 { 00:19:57.339 "results": [ 00:19:57.339 { 00:19:57.339 "job": "nvme0n1", 00:19:57.339 "core_mask": "0x2", 00:19:57.339 "workload": "randread", 00:19:57.339 "status": "finished", 00:19:57.339 "queue_depth": 16, 00:19:57.339 "io_size": 131072, 00:19:57.339 "runtime": 2.002703, 00:19:57.339 "iops": 7477.893626763429, 00:19:57.339 "mibps": 934.7367033454286, 00:19:57.339 "io_failed": 0, 00:19:57.340 "io_timeout": 0, 00:19:57.340 "avg_latency_us": 2135.8308003108004, 00:19:57.340 "min_latency_us": 558.5454545454545, 00:19:57.340 "max_latency_us": 10128.290909090909 00:19:57.340 } 00:19:57.340 ], 00:19:57.340 "core_count": 1 00:19:57.340 } 00:19:57.599 17:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:57.599 17:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:57.599 17:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:57.599 | select(.opcode=="crc32c") 00:19:57.599 | "\(.module_name) \(.executed)"' 00:19:57.599 17:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:57.599 17:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94246 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94246 ']' 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94246 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94246 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
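The summary numbers above are internally consistent: IOPS is io_count divided by runtime, and MiB/s is IOPS times the I/O size over 2^20. Worked through for this 131072-byte run:

    mibps = iops * io_size / 1048576
          = 7477.89 * 131072 / 1048576
          = 934.74            # matches the "mibps" field in the JSON result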
00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94246' 00:19:57.858 killing process with pid 94246 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94246 00:19:57.858 Received shutdown signal, test time was about 2.000000 seconds 00:19:57.858 00:19:57.858 Latency(us) 00:19:57.858 [2024-11-19T17:18:50.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.858 [2024-11-19T17:18:50.391Z] =================================================================================================================== 00:19:57.858 [2024-11-19T17:18:50.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.858 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94246 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94323 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94323 /var/tmp/bperf.sock 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94323 ']' 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.117 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:58.117 [2024-11-19 17:18:50.526560] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:19:58.117 [2024-11-19 17:18:50.526654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94323 ] 00:19:58.375 [2024-11-19 17:18:50.672285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.375 [2024-11-19 17:18:50.738017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.376 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.376 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:58.376 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:58.376 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:58.376 17:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:58.943 17:18:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:58.943 17:18:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:59.201 nvme0n1 00:19:59.201 17:18:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:59.201 17:18:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:59.201 Running I/O for 2 seconds... 
00:20:01.165 26144.00 IOPS, 102.12 MiB/s [2024-11-19T17:18:53.698Z] 26456.00 IOPS, 103.34 MiB/s 00:20:01.165 Latency(us) 00:20:01.165 [2024-11-19T17:18:53.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.165 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.165 nvme0n1 : 2.01 26439.14 103.28 0.00 0.00 4835.78 2219.29 14179.61 00:20:01.165 [2024-11-19T17:18:53.698Z] =================================================================================================================== 00:20:01.165 [2024-11-19T17:18:53.698Z] Total : 26439.14 103.28 0.00 0.00 4835.78 2219.29 14179.61 00:20:01.424 { 00:20:01.424 "results": [ 00:20:01.424 { 00:20:01.424 "job": "nvme0n1", 00:20:01.424 "core_mask": "0x2", 00:20:01.424 "workload": "randwrite", 00:20:01.424 "status": "finished", 00:20:01.424 "queue_depth": 128, 00:20:01.424 "io_size": 4096, 00:20:01.424 "runtime": 2.007403, 00:20:01.424 "iops": 26439.135539799434, 00:20:01.424 "mibps": 103.27787320234154, 00:20:01.424 "io_failed": 0, 00:20:01.424 "io_timeout": 0, 00:20:01.424 "avg_latency_us": 4835.778373797133, 00:20:01.424 "min_latency_us": 2219.287272727273, 00:20:01.424 "max_latency_us": 14179.607272727273 00:20:01.424 } 00:20:01.424 ], 00:20:01.424 "core_count": 1 00:20:01.424 } 00:20:01.424 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:01.424 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:01.424 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:01.424 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:01.424 | select(.opcode=="crc32c") 00:20:01.424 | "\(.module_name) \(.executed)"' 00:20:01.424 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94323 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94323 ']' 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94323 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.684 17:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94323 00:20:01.684 killing process with pid 94323 00:20:01.684 Received shutdown signal, test time was about 2.000000 seconds 00:20:01.684 00:20:01.684 Latency(us) 00:20:01.684 [2024-11-19T17:18:54.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:01.684 [2024-11-19T17:18:54.217Z] =================================================================================================================== 00:20:01.684 [2024-11-19T17:18:54.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94323' 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94323 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94323 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94396 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94396 /var/tmp/bperf.sock 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94396 ']' 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:01.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.684 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:01.943 [2024-11-19 17:18:54.271933] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:20:01.943 [2024-11-19 17:18:54.272394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94396 ] 00:20:01.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:01.943 Zero copy mechanism will not be used. 00:20:01.943 [2024-11-19 17:18:54.421616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.943 [2024-11-19 17:18:54.464198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.202 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.202 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:02.202 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:02.202 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:02.202 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:02.460 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:02.460 17:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:02.718 nvme0n1 00:20:02.718 17:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:02.719 17:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:02.719 Zero copy mechanism will not be used. 00:20:02.719 Running I/O for 2 seconds... 
00:20:05.031 7053.00 IOPS, 881.62 MiB/s [2024-11-19T17:18:57.564Z] 7271.50 IOPS, 908.94 MiB/s 00:20:05.031 Latency(us) 00:20:05.031 [2024-11-19T17:18:57.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.031 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:05.031 nvme0n1 : 2.00 7267.52 908.44 0.00 0.00 2197.03 1839.48 7923.90 00:20:05.031 [2024-11-19T17:18:57.564Z] =================================================================================================================== 00:20:05.031 [2024-11-19T17:18:57.564Z] Total : 7267.52 908.44 0.00 0.00 2197.03 1839.48 7923.90 00:20:05.031 { 00:20:05.031 "results": [ 00:20:05.031 { 00:20:05.031 "job": "nvme0n1", 00:20:05.031 "core_mask": "0x2", 00:20:05.031 "workload": "randwrite", 00:20:05.031 "status": "finished", 00:20:05.031 "queue_depth": 16, 00:20:05.031 "io_size": 131072, 00:20:05.031 "runtime": 2.003298, 00:20:05.031 "iops": 7267.515866336411, 00:20:05.031 "mibps": 908.4394832920514, 00:20:05.031 "io_failed": 0, 00:20:05.031 "io_timeout": 0, 00:20:05.031 "avg_latency_us": 2197.0331808503333, 00:20:05.031 "min_latency_us": 1839.4763636363637, 00:20:05.031 "max_latency_us": 7923.898181818182 00:20:05.031 } 00:20:05.031 ], 00:20:05.031 "core_count": 1 00:20:05.031 } 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:05.031 | select(.opcode=="crc32c") 00:20:05.031 | "\(.module_name) \(.executed)"' 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94396 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94396 ']' 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94396 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94396 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
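The killprocess trace that follows each run shows the same guard-then-kill shape every time. A sketch reconstructed from the numbered lines in the trace (the real helper in autotest_common.sh has more cases, e.g. sudo-wrapped processes):

    killprocess() {
        [ -z "$1" ] && return 1                            # @954: a pid is required
        kill -0 "$1" || return 1                           # @958: process must still be alive
        [ "$(uname)" = Linux ] &&                          # @959
            process_name=$(ps --no-headers -o comm= "$1")  # @960: e.g. reactor_1
        [ "$process_name" = sudo ] && return               # @964: sudo wrappers handled separately (elided)
        echo "killing process with pid $1"                 # @972
        kill "$1"                                          # @973: SIGTERM triggers the shutdown summary
        wait "$1"                                          # @978
    }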
00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94396' 00:20:05.031 killing process with pid 94396 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94396 00:20:05.031 Received shutdown signal, test time was about 2.000000 seconds 00:20:05.031 00:20:05.031 Latency(us) 00:20:05.031 [2024-11-19T17:18:57.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.031 [2024-11-19T17:18:57.564Z] =================================================================================================================== 00:20:05.031 [2024-11-19T17:18:57.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.031 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94396 00:20:05.290 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94137 00:20:05.290 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94137 ']' 00:20:05.290 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94137 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94137 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.291 killing process with pid 94137 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94137' 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94137 00:20:05.291 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94137 00:20:05.549 00:20:05.549 real 0m15.635s 00:20:05.549 user 0m29.081s 00:20:05.549 sys 0m5.064s 00:20:05.549 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.549 17:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 ************************************ 00:20:05.549 END TEST nvmf_digest_clean 00:20:05.549 ************************************ 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 ************************************ 00:20:05.549 START TEST nvmf_digest_error 00:20:05.549 ************************************ 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:05.549 17:18:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94500 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94500 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94500 ']' 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.549 17:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.808 [2024-11-19 17:18:58.117040] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:20:05.808 [2024-11-19 17:18:58.117135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.808 [2024-11-19 17:18:58.258139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.808 [2024-11-19 17:18:58.301295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.808 [2024-11-19 17:18:58.301353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.808 [2024-11-19 17:18:58.301363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.808 [2024-11-19 17:18:58.301369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.808 [2024-11-19 17:18:58.301376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:05.808 [2024-11-19 17:18:58.301724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.744 [2024-11-19 17:18:59.114220] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.744 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.744 null0 00:20:06.744 [2024-11-19 17:18:59.250897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.744 [2024-11-19 17:18:59.275055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.002 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94544 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94544 /var/tmp/bperf.sock 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94544 ']' 00:20:07.003 17:18:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:07.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.003 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.003 [2024-11-19 17:18:59.340999] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:20:07.003 [2024-11-19 17:18:59.341091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94544 ] 00:20:07.003 [2024-11-19 17:18:59.493248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.262 [2024-11-19 17:18:59.546026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.262 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.262 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:07.262 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.262 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.521 17:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.779 nvme0n1 00:20:08.037 17:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:08.037 17:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.037 17:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.037 17:19:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.037 17:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:08.037 17:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:08.037 Running I/O for 2 seconds... 00:20:08.037 [2024-11-19 17:19:00.455387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.455434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.467604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.467641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.467653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.478274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.478310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.478324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.489651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.489687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.489699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.499883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.499918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.499931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.510528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.510563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.510575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:08.037 [2024-11-19 17:19:00.520383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.520418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.520446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.531501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.531680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.531697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.543046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.543232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.543249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.553125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.553161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.553173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.037 [2024-11-19 17:19:00.564474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.037 [2024-11-19 17:19:00.564508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.037 [2024-11-19 17:19:00.564520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.576656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.576691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.576704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.585821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.585855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.585867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.597279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.597449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.597477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.608745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.608925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.608953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.619633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.619669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.619681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.631501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.631551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.631579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.642337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.642372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.642384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.654327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.654361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.654373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.663870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.663904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.663916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.675075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.675244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.675261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.687601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.687637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.687650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.698639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.698819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.698836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.710088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.710273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.710429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.722367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.722572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.722702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.734171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.734404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.295 [2024-11-19 17:19:00.734518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.295 [2024-11-19 17:19:00.744787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.295 [2024-11-19 17:19:00.744974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.295 [2024-11-19 17:19:00.745142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.756585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.756786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.756932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.767711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.767916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.768078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.778938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.779137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.779373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.789315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.789636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.800831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.801034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.801151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.813165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.813372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.813473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.296 [2024-11-19 17:19:00.824438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.296 [2024-11-19 17:19:00.824656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3200 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.296 [2024-11-19 17:19:00.824688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.554 [2024-11-19 17:19:00.836852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.554 [2024-11-19 17:19:00.836888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.554 [2024-11-19 17:19:00.836916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.554 [2024-11-19 17:19:00.848009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.554 [2024-11-19 17:19:00.848223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.848241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.858129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.858164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.858176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.869203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.869238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.869251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.880404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.880439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.880451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.891389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.891555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.891572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.902850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.902914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.913406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.913441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.913454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.924223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.924260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.924288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.935870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.935904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.935916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.947355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.947391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.947420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.957307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.957475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.957491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.969225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.969260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.969273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.979963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.980166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.980183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:00.991352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:00.991517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:00.991534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.002422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.002593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.002778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.013922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.014224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.025752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.025938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.026073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.037692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.037905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.038022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.050208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.050375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.050546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.060097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 
00:20:08.555 [2024-11-19 17:19:01.060302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.060491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.071835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.072004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.072166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.555 [2024-11-19 17:19:01.083978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.555 [2024-11-19 17:19:01.084233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.555 [2024-11-19 17:19:01.084421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.814 [2024-11-19 17:19:01.096508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.814 [2024-11-19 17:19:01.096679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.814 [2024-11-19 17:19:01.096944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.814 [2024-11-19 17:19:01.108516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.814 [2024-11-19 17:19:01.108699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.814 [2024-11-19 17:19:01.108879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.814 [2024-11-19 17:19:01.120267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.814 [2024-11-19 17:19:01.120473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.814 [2024-11-19 17:19:01.120614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.814 [2024-11-19 17:19:01.132397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.814 [2024-11-19 17:19:01.132432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.814 [2024-11-19 17:19:01.132461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.814 [2024-11-19 17:19:01.142605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.142641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.142653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.153792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.153827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.153839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.165903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.165940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.165970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.176231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.176272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.176286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.188103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.188144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.188159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.199838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.199872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.199885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.211914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.211949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.211960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.223532] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.223566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.223578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.233762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.233796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.233807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.244494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.244660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.244678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.256089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.256280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.256298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.267991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.268031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.268044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.279304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.279469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.279486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.288879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.288914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.288942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:08.815 [2024-11-19 17:19:01.299694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.299729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.299786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.313011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.313050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.313079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.325816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.325863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.325892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.815 [2024-11-19 17:19:01.336728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:08.815 [2024-11-19 17:19:01.336804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.815 [2024-11-19 17:19:01.336819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.348838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.348918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.360644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.360679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.360691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.372603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.372637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.372649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.383744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.383789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.383819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.393855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.393889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.393901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.404819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.405037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.416776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.416991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.074 [2024-11-19 17:19:01.417010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.074 [2024-11-19 17:19:01.429630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.074 [2024-11-19 17:19:01.429665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.429677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 22384.00 IOPS, 87.44 MiB/s [2024-11-19T17:19:01.608Z] [2024-11-19 17:19:01.443289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.443325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.443355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.495088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.495124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.495152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.506632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.506668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.506696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.518816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.518849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.518876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.530516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.530551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.530579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.542019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.542054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.542067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.551631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.551667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.551696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.564498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.564681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.564714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.576381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.576583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:09.075 [2024-11-19 17:19:01.576600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.587003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.587184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.587218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.075 [2024-11-19 17:19:01.597235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.075 [2024-11-19 17:19:01.597271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.075 [2024-11-19 17:19:01.597299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.610159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.610354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.610372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.621591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.621784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.621817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.631895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.631930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.631943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.643779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.643813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.643825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.654849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.654883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7223 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.654911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.664690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.664725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.664750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.676031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.676066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.676078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.687246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.687281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.687294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.697359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.697525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.697542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.709161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.709197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.709209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.720099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.720286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.720303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.731182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.731348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:16878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.731365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.741183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.741218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.741230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.753892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.753927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.753939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.764884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.765065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.765084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.777310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.777346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.777359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.334 [2024-11-19 17:19:01.788227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.334 [2024-11-19 17:19:01.788415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.334 [2024-11-19 17:19:01.788432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.798874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.798909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.798921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.809449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.809484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.809496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.820224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.820258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.820270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.831382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.831417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.831429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.841889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.842037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.842070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.852851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.852886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.852898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.335 [2024-11-19 17:19:01.864002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.335 [2024-11-19 17:19:01.864211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.335 [2024-11-19 17:19:01.864227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.873870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.873905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.873917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.886943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 
00:20:09.593 [2024-11-19 17:19:01.887109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.887125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.897102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.897138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.897150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.908184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.908219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.908231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.919607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.919643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.919655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.929158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.929193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.929220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.939600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.939636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.939648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.951876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.951910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.593 [2024-11-19 17:19:01.951922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.593 [2024-11-19 17:19:01.961650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xcb8540) 00:20:09.593 [2024-11-19 17:19:01.961684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:01.961696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:01.973045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:01.973080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:01.973093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:01.983706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:01.983751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:01.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:01.996507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:01.996542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:01.996554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.006672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.006853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.006869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.016765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.016802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.016814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.028219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.028254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.028266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.040421] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.040456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.040468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.050485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.050519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.050531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.060651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.060686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.060697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.072706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.072751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.072781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.082842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.082875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.082887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.093474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.093641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.093658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.594 [2024-11-19 17:19:02.105434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.105601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.105618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:09.594 [2024-11-19 17:19:02.115569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.594 [2024-11-19 17:19:02.115604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.594 [2024-11-19 17:19:02.115617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.128434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.128470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.128483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.139046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.139081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.139094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.149972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.150007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.150019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.161307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.161458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.161490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.173904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.173939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.173951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.183008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.183043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.183056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.194773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.194807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.194820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.206636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.206672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.206684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.217726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.217771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.217784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.227685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.227720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.227749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.238876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.238911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.238923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.250111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.250145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.250157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.260885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.260918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.260946] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.272781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.272992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.852 [2024-11-19 17:19:02.273024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.852 [2024-11-19 17:19:02.283560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.852 [2024-11-19 17:19:02.283596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.283625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.295735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.295782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.295794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.304618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.304655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.304667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.314582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.314617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.314630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.326852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.326887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.326899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.338208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.338412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.338430] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.348319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.348419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.348447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.361098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.361294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.361312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.853 [2024-11-19 17:19:02.372707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:09.853 [2024-11-19 17:19:02.372899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.853 [2024-11-19 17:19:02.372932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 [2024-11-19 17:19:02.384452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.384616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.111 [2024-11-19 17:19:02.384633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 [2024-11-19 17:19:02.396785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.396830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.111 [2024-11-19 17:19:02.396843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 [2024-11-19 17:19:02.406330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.406366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.111 [2024-11-19 17:19:02.406378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 [2024-11-19 17:19:02.418653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.418689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:10.111 [2024-11-19 17:19:02.418702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 [2024-11-19 17:19:02.430093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.430127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.111 [2024-11-19 17:19:02.430139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 22162.00 IOPS, 86.57 MiB/s [2024-11-19T17:19:02.644Z] [2024-11-19 17:19:02.440804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb8540) 00:20:10.111 [2024-11-19 17:19:02.440845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.111 [2024-11-19 17:19:02.440859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.111 00:20:10.111 Latency(us) 00:20:10.111 [2024-11-19T17:19:02.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.111 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:10.111 nvme0n1 : 2.00 22175.72 86.62 0.00 0.00 5765.50 2829.96 52190.49 00:20:10.111 [2024-11-19T17:19:02.644Z] =================================================================================================================== 00:20:10.111 [2024-11-19T17:19:02.644Z] Total : 22175.72 86.62 0.00 0.00 5765.50 2829.96 52190.49 00:20:10.111 { 00:20:10.111 "results": [ 00:20:10.111 { 00:20:10.111 "job": "nvme0n1", 00:20:10.111 "core_mask": "0x2", 00:20:10.111 "workload": "randread", 00:20:10.111 "status": "finished", 00:20:10.111 "queue_depth": 128, 00:20:10.111 "io_size": 4096, 00:20:10.111 "runtime": 2.004535, 00:20:10.111 "iops": 22175.71656269409, 00:20:10.111 "mibps": 86.6238928230238, 00:20:10.111 "io_failed": 0, 00:20:10.111 "io_timeout": 0, 00:20:10.111 "avg_latency_us": 5765.496702796889, 00:20:10.111 "min_latency_us": 2829.9636363636364, 00:20:10.111 "max_latency_us": 52190.487272727274 00:20:10.111 } 00:20:10.111 ], 00:20:10.111 "core_count": 1 00:20:10.111 } 00:20:10.111 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:10.111 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:10.111 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:10.111 | .driver_specific 00:20:10.111 | .nvme_error 00:20:10.111 | .status_code 00:20:10.111 | .command_transient_transport_error' 00:20:10.111 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 174 > 0 )) 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94544 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94544 ']' 
00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94544 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94544 00:20:10.370 killing process with pid 94544 00:20:10.370 Received shutdown signal, test time was about 2.000000 seconds 00:20:10.370 00:20:10.370 Latency(us) 00:20:10.370 [2024-11-19T17:19:02.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.370 [2024-11-19T17:19:02.903Z] =================================================================================================================== 00:20:10.370 [2024-11-19T17:19:02.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94544' 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94544 00:20:10.370 17:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94544 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94621 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94621 /var/tmp/bperf.sock 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94621 ']' 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:10.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
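The @27/@28 trace just above is the entire pass/fail check for the run that was shut down: get_transient_errcount asks the bdevperf instance for per-bdev I/O statistics over the bperf RPC socket, and jq extracts the NVMe transient-transport-error counter, which came back as 174, so the (( 174 > 0 )) assertion passed. As a quick sanity check on the summary table, 22175.72 IOPS × 4096 B is about 86.62 MiB/s, matching the reported throughput. A minimal standalone sketch of the same lookup, assuming this run's socket path and repo layout (the shell variable names are illustrative, not part of the harness):

# Read back the transient-transport-error counter, as traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path used by this CI run
sock=/var/tmp/bperf.sock                          # bdevperf RPC socket

# bdev_get_iostat carries per-bdev NVMe status-code counters here because the
# harness configures bdevperf with bdev_nvme_set_options --nvme-error-stat.
count=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test only requires that the injected digest failures surfaced as
# COMMAND TRANSIENT TRANSPORT ERROR completions at least once.
(( count > 0 )) && echo "transient transport errors: $count"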
00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:10.628 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:10.887 [2024-11-19 17:19:03.045727] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:20:10.887 [2024-11-19 17:19:03.046006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94621 ]
00:20:10.887 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:10.887 Zero copy mechanism will not be used.
00:20:10.887 [2024-11-19 17:19:03.178853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:10.887 [2024-11-19 17:19:03.226179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:11.452 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:11.452 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:11.452 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:11.452 17:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:11.709 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:11.709 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.709 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:11.966 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.966 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:11.966 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:12.224 nvme0n1
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:12.224 17:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:12.224 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:12.224 Zero copy mechanism will not be used.
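Before the new bdevperf process starts issuing 128 KiB reads, the trace above re-arms the digest-failure path over the fresh RPC socket. Condensed into plain shell for readability (same socket, address, and flags as traced; a sketch of the sequence, not the harness script itself):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Count NVMe status codes per bdev and retry failed I/O indefinitely, so the
# injected errors land in bdev_get_iostat instead of failing the job.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c error injection off while the controller attaches.
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled; every
# payload is then covered by a CRC32C that the accel framework verifies.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable corruption of crc32c operations, with the same -o/-t/-i arguments
# the harness passes, so received data fails digest verification again.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

Each corrupted digest then shows up below as an *ERROR* line from nvme_tcp.c followed by the offending READ and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.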
00:20:12.224 Running I/O for 2 seconds...
00:20:12.224 [2024-11-19 17:19:04.738432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.224 [2024-11-19 17:19:04.738516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.224 [2024-11-19 17:19:04.738531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:12.224 [2024-11-19 17:19:04.741658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.224 [2024-11-19 17:19:04.741695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.224 [2024-11-19 17:19:04.741723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:12.224 [2024-11-19 17:19:04.746231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.224 [2024-11-19 17:19:04.746283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.224 [2024-11-19 17:19:04.746312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:12.224 [2024-11-19 17:19:04.750592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.224 [2024-11-19 17:19:04.750627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.224 [2024-11-19 17:19:04.750655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:12.224 [2024-11-19 17:19:04.755627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.224 [2024-11-19 17:19:04.755692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.224 [2024-11-19 17:19:04.755720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:12.483 [2024-11-19 17:19:04.760128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.483 [2024-11-19 17:19:04.760167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.483 [2024-11-19 17:19:04.760197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:12.484 [2024-11-19 17:19:04.763216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.484 [2024-11-19 17:19:04.763249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.484 [2024-11-19
17:19:04.763276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.767569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.767604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.767633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.771433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.771468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.771496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.775077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.775111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.775138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.779284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.779319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.779346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.782491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.782540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.782567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.786306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.786356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.786384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.790452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.790485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.790513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.793801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.793851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.793878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.798179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.798229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.798257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.802164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.802199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.802227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.805591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.805625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.805653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.809389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.809424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.809451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.813351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.813386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.484 [2024-11-19 17:19:04.813413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.484 [2024-11-19 17:19:04.816905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:12.484 [2024-11-19 17:19:04.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.484 [2024-11-19 17:19:04.816966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:12.484 [2024-11-19 17:19:04.821038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.484 [2024-11-19 17:19:04.821073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.484 [2024-11-19 17:19:04.821100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:12.484 [2024-11-19 17:19:04.825503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:12.484 [2024-11-19 17:19:04.825568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.484 [2024-11-19 17:19:04.825595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... repeated record triplets from 17:19:04.829565 through 17:19:05.388184 omitted: each is a nvme_tcp.c:1365 *ERROR* data digest error on tqpair=(0xf17880), followed by a nvme_qpair.c READ command print (sqid:1, nsid:1, len:32, cid/lba varying) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (qid:1, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0) ...]
00:20:13.008 [2024-11-19 17:19:05.392668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:13.008 [2024-11-19 17:19:05.392699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.008 [2024-11-19 17:19:05.392710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.008 [2024-11-19 17:19:05.395646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880)
00:20:13.008 [2024-11-19 17:19:05.395676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.008
[2024-11-19 17:19:05.395687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.008 [2024-11-19 17:19:05.399573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.008 [2024-11-19 17:19:05.399619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.008 [2024-11-19 17:19:05.399630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.008 [2024-11-19 17:19:05.404541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.008 [2024-11-19 17:19:05.404588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.404599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.408465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.408527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.408539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.412107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.412144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.412158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.416680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.416713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.416724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.421581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.421643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.421655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.425970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.426000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.426011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.428808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.428862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.428874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.432830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.432875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.432886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.437578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.437609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.437620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.440665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.440696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.440706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.444695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.444742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.444780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.449000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.449046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.449058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.453107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.453138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.453149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.456687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.456718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.456729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.459889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.459919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.459930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.464006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.464043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.464070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.467976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.468022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.468058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.472091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.472144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.472155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.474537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.474567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.474578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.478599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.478630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.478640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.482434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.482464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.482475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.485243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.485273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.485284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.489313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.489343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.489354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.493569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.493600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.493611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.497360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.497390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.497401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.500423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.500469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.500495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.504376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 
[2024-11-19 17:19:05.504438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.009 [2024-11-19 17:19:05.504450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.009 [2024-11-19 17:19:05.508574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.009 [2024-11-19 17:19:05.508604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.508614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.513378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.513424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.513436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.517835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.517867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.517879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.521040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.521072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.521084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.525863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.525912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.525925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.530839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.530914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.530928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.010 [2024-11-19 17:19:05.535911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf17880) 00:20:13.010 [2024-11-19 17:19:05.535975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.010 [2024-11-19 17:19:05.535988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.541090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.541168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.541193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.545909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.545956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.545969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.550311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.550341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.550352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.553232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.553262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.553273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.557777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.557823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.557834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.562191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.562222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.562233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.566215] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.566246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.569064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.569095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.569105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.573428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.270 [2024-11-19 17:19:05.573459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.270 [2024-11-19 17:19:05.573469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.270 [2024-11-19 17:19:05.577316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.577347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.577358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.580695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.580725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.580748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.584717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.584759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.584770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.587396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.587426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.587436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:13.271 [2024-11-19 17:19:05.591254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.591285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.591295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.595721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.595777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.595789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.600005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.600059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.600071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.604465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.604497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.604508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.608303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.608347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.608358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.611078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.611107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.611118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.614937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.614968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.614979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.618630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.618678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.618689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.622349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.622381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.622391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.625422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.625453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.625463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.629702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.629747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.629762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.634097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.634145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.634172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.637136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.637192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.637206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.641721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.641793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.641821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.646371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.646419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.646431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.650706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.650761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.650774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.654875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.654907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.654919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.657962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.658007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.658018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.662418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.662464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.662475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.666973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.667007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.667019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.671012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.671045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.671056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.675062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.675095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.675107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.678296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.271 [2024-11-19 17:19:05.678342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.271 [2024-11-19 17:19:05.678354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.271 [2024-11-19 17:19:05.682792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.682838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.682850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.685817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.685863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.685875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.689943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.689990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.690001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.694223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.694269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.694282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.698726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.698782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 
[2024-11-19 17:19:05.698794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.703243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.703290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.703302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.706195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.706241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.706252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.710628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.710676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.710688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.714892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.714941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.714953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.717994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.718042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.718054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.722799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.722861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.722873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.727407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.727453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.727465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 7699.00 IOPS, 962.38 MiB/s [2024-11-19T17:19:05.805Z] [2024-11-19 17:19:05.733187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.733234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.733246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.737662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.737708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.737720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.741641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.741687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.741698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.744823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.744885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.744898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.749078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.749110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.749121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.753540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.753597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.758000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.758046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.758057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.762388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.762434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.762446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.766957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.767003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.767015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.769690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.769721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.769732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.773825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.773855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.773866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.778467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.778509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.781433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.781463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.781475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.785399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 
[2024-11-19 17:19:05.785430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.785440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.272 [2024-11-19 17:19:05.789389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.272 [2024-11-19 17:19:05.789420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.272 [2024-11-19 17:19:05.789431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.273 [2024-11-19 17:19:05.792938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.273 [2024-11-19 17:19:05.792969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.273 [2024-11-19 17:19:05.792980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.273 [2024-11-19 17:19:05.796583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.273 [2024-11-19 17:19:05.796630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.273 [2024-11-19 17:19:05.796641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.533 [2024-11-19 17:19:05.800145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.533 [2024-11-19 17:19:05.800194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.533 [2024-11-19 17:19:05.800207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.533 [2024-11-19 17:19:05.804408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.533 [2024-11-19 17:19:05.804465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.533 [2024-11-19 17:19:05.804477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.533 [2024-11-19 17:19:05.809044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:13.533 [2024-11-19 17:19:05.809074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.533 [2024-11-19 17:19:05.809086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.533 [2024-11-19 17:19:05.813223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xf17880) 00:20:13.533 [2024-11-19 17:19:05.813253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.533 [2024-11-19 17:19:05.813264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[log condensed: the same three-message pattern -- nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880), followed by a nvme_qpair.c: 243:nvme_io_qpair_print_command READ print (sqid:1, nsid:1, len:32, varying cid and lba) and a nvme_qpair.c: 474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for dozens more READ commands between [2024-11-19 17:19:05.816182] and [2024-11-19 17:19:06.317459] (elapsed 00:20:13.533 through 00:20:14.058)]
00:20:13.797 [2024-11-19 17:19:06.320939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) [2024-11-19 17:19:06.320989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.797 [2024-11-19 17:19:06.321017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.325944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.326007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.330513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.330562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.330574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.334416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.334463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.334475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.338462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.338510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.338522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.342689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.342738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.342750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.347319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.347367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.347379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.351691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.351738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.351761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.354550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.354596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.354607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.359268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.359316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.359328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.364165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.364202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.364215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.368676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.368725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.368737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.373503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.373582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.377690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.377763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.382210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.382259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.382271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.387072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.387106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.387118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.391661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.391709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.391721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.395896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.395942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.395953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.400690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.058 [2024-11-19 17:19:06.400738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.058 [2024-11-19 17:19:06.400778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.058 [2024-11-19 17:19:06.405561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.405609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.405621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.410589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.410637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.410650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.415218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 
[2024-11-19 17:19:06.415266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.415279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.418318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.418366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.418378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.423541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.423575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.423587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.427285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.427334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.427346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.432221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.432263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.432276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.436601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.436650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.436662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.441082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.441117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.441129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.444669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.444702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.444715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.449084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.449134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.449147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.454090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.454141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.454169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.459029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.459067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.459081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.463991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.464069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.464083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.467059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.467095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.467108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.471495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.471546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.471558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.476517] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.476579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.481563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.481613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.481625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.486125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.486174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.486201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.490714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.490789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.493688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.493735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.493763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.498716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.498779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.498792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.503690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.503738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.503761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:20:14.059 [2024-11-19 17:19:06.508688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.508736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.508749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.511596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.511642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.511653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.516131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.516167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.516180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.520654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.520702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.059 [2024-11-19 17:19:06.520714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.059 [2024-11-19 17:19:06.523378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.059 [2024-11-19 17:19:06.523425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.523436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.527894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.527943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.527955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.532093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.532126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.532138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.535508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.535542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.535554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.539613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.539661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.539672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.543737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.543813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.543826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.548529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.548579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.548591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.551917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.551965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.551978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.557536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.557586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.557597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.562387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.562435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.562446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.565564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.565611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.565622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.569984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.570033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.570046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.574790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.574855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.574868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.579327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.579374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.579386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.060 [2024-11-19 17:19:06.583718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.060 [2024-11-19 17:19:06.583781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.060 [2024-11-19 17:19:06.583794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.588473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.588521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.588533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.592984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.593031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.593043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.597613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.597660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.597671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.600560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.600606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.600618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.604590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.604638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.604649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.609222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.609271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.613547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.613595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.613606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.616728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.616784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.616796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.621155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.621202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.320 [2024-11-19 17:19:06.621213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.625464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.625510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.625522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.628448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.628508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.628520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.632350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.632427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.632439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.636628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.636675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.636686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.641117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.641163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.641175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.644285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.644334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.644346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.648837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.648884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.648895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.653182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.653230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.653241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.320 [2024-11-19 17:19:06.657339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.320 [2024-11-19 17:19:06.657386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.320 [2024-11-19 17:19:06.657397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.660759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.660815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.660827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.664626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.664673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.664685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.668290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.668338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.668350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.672689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.672746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.676768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.676824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.676836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.680285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.680318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.680332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.684677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.684723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.684734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.688985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.689031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.689044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.692158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.692206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.692217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.696459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.696523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.696535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.701336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.701383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.701394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.705419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.705466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.705477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.708428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.708475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.708487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.713072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.713116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.713127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.716982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.717028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.717039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.721221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.721268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.721280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.725802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.725848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:14.321 [2024-11-19 17:19:06.729092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf17880) 00:20:14.321 [2024-11-19 17:19:06.729137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.321 [2024-11-19 17:19:06.729149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:14.321 7579.50 IOPS, 947.44 MiB/s 00:20:14.321 Latency(us) 00:20:14.321 [2024-11-19T17:19:06.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:20:14.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:14.321 nvme0n1 : 2.00 7576.08 947.01 0.00 0.00 2108.26 528.76 11617.75 00:20:14.321 [2024-11-19T17:19:06.854Z] =================================================================================================================== 00:20:14.321 [2024-11-19T17:19:06.854Z] Total : 7576.08 947.01 0.00 0.00 2108.26 528.76 11617.75 00:20:14.321 { 00:20:14.321 "results": [ 00:20:14.321 { 00:20:14.321 "job": "nvme0n1", 00:20:14.321 "core_mask": "0x2", 00:20:14.321 "workload": "randread", 00:20:14.321 "status": "finished", 00:20:14.321 "queue_depth": 16, 00:20:14.321 "io_size": 131072, 00:20:14.321 "runtime": 2.003015, 00:20:14.321 "iops": 7576.07906081582, 00:20:14.321 "mibps": 947.0098826019776, 00:20:14.321 "io_failed": 0, 00:20:14.321 "io_timeout": 0, 00:20:14.321 "avg_latency_us": 2108.2623003145127, 00:20:14.321 "min_latency_us": 528.7563636363636, 00:20:14.321 "max_latency_us": 11617.745454545455 00:20:14.321 } 00:20:14.321 ], 00:20:14.321 "core_count": 1 00:20:14.321 } 00:20:14.321 17:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:14.321 17:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:14.321 17:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:14.321 | .driver_specific 00:20:14.321 | .nvme_error 00:20:14.321 | .status_code 00:20:14.321 | .command_transient_transport_error' 00:20:14.321 17:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 490 > 0 )) 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94621 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94621 ']' 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94621 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94621 00:20:14.580 killing process with pid 94621 00:20:14.580 Received shutdown signal, test time was about 2.000000 seconds 00:20:14.580 00:20:14.580 Latency(us) 00:20:14.580 [2024-11-19T17:19:07.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.580 [2024-11-19T17:19:07.113Z] =================================================================================================================== 00:20:14.580 [2024-11-19T17:19:07.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process 
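(Aside: the get_transient_errcount check traced just above reduces to one RPC plus a jq projection. A minimal standalone sketch, using only the socket path, script paths, and jq filter visible in this trace; digest.sh's own wrapper functions may add detail:)

    #!/usr/bin/env bash
    # Count I/Os whose completions were recorded as COMMAND TRANSIENT TRANSPORT
    # ERROR (00/22); requires bdev_nvme_set_options --nvme-error-stat beforehand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The harness only asserts the count is non-zero, e.g. "(( 490 > 0 ))" above.
    (( count > 0 )) && echo "nvme0n1: $count transient transport errors"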
with pid 94621' 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94621 00:20:14.580 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94621 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94706 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94706 /var/tmp/bperf.sock 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94706 ']' 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:14.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.838 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:14.838 [2024-11-19 17:19:07.313595] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:20:14.838 [2024-11-19 17:19:07.313855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94706 ] 00:20:15.096 [2024-11-19 17:19:07.451365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.096 [2024-11-19 17:19:07.499437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.096 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.096 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:15.096 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:15.096 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.662 17:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.920 nvme0n1 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:15.920 17:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:15.920 Running I/O for 2 seconds... 
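Before the 2-second randwrite run starts, the trace above sets up the error path with a handful of RPCs. Condensed into a standalone sketch (paths and arguments copied from the trace; the accel_error injection goes through rpc_cmd, i.e. the application's default RPC socket rather than bperf.sock, and the -i 256 injection interval is taken verbatim from the trace):

    # Start bdevperf on core 1 (-m 2) in wait-for-RPC mode (-z): 4 KiB random writes, QD 128, 2 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Keep NVMe error counters and retry transient errors indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous crc32c error injection, then attach with TCP data digest enabled
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Periodically corrupt crc32c results (interval 256) so data digests mismatch
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the I/O
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1, every digest failure is retried rather than surfaced as an I/O failure, which is why the run below keeps making forward progress despite the flood of digest errors.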
00:20:15.920 [2024-11-19 17:19:08.388158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e2c28 00:20:15.921 [2024-11-19 17:19:08.389035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.389110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.397370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f46d0 00:20:15.921 [2024-11-19 17:19:08.398256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.398304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.406060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0ff8 00:20:15.921 [2024-11-19 17:19:08.406842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.406919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.416573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0788 00:20:15.921 [2024-11-19 17:19:08.417644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.417693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.424469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5220 00:20:15.921 [2024-11-19 17:19:08.425210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.425258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.433746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e8d30 00:20:15.921 [2024-11-19 17:19:08.434489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.434539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:15.921 [2024-11-19 17:19:08.445120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e84c0 00:20:15.921 [2024-11-19 17:19:08.446563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.921 [2024-11-19 17:19:08.446610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.452278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f1430 00:20:16.186 [2024-11-19 17:19:08.453036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.453085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.464147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5a90 00:20:16.186 [2024-11-19 17:19:08.465376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.465438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.473987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e1f80 00:20:16.186 [2024-11-19 17:19:08.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.476313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.483648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e1710 00:20:16.186 [2024-11-19 17:19:08.484348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.484421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.495864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eea00 00:20:16.186 [2024-11-19 17:19:08.497312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.497361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.502967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e7818 00:20:16.186 [2024-11-19 17:19:08.503672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.186 [2024-11-19 17:19:08.503720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:16.186 [2024-11-19 17:19:08.514852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8a50 00:20:16.187 [2024-11-19 17:19:08.515907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.515962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.523885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ebfd0 00:20:16.187 [2024-11-19 17:19:08.525174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.525221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.533385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4140 00:20:16.187 [2024-11-19 17:19:08.534198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.534262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.544915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e8088 00:20:16.187 [2024-11-19 17:19:08.546215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.546262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.551877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0ff8 00:20:16.187 [2024-11-19 17:19:08.552488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.552537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.563433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6890 00:20:16.187 [2024-11-19 17:19:08.564537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.564583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.572335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fc560 00:20:16.187 [2024-11-19 17:19:08.573591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.573637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.581639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e27f0 00:20:16.187 [2024-11-19 17:19:08.582556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.582600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.593577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ea248 00:20:16.187 [2024-11-19 17:19:08.594939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.594974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.600638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fb8b8 00:20:16.187 [2024-11-19 17:19:08.601296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.601359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.613161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f46d0 00:20:16.187 [2024-11-19 17:19:08.614304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.614351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.622404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166df550 00:20:16.187 [2024-11-19 17:19:08.623718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.623777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.631728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5a90 00:20:16.187 [2024-11-19 17:19:08.632555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.632632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.640887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5658 00:20:16.187 [2024-11-19 17:19:08.641670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.641734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.649663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f57b0 00:20:16.187 [2024-11-19 17:19:08.650426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.650474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.661025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6020 00:20:16.187 [2024-11-19 17:19:08.662430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.662478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.667913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166dece0 00:20:16.187 [2024-11-19 17:19:08.668632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.668680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.680213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ebb98 00:20:16.187 [2024-11-19 17:19:08.681628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.681675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.687119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ed4e8 00:20:16.187 [2024-11-19 17:19:08.687837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.687892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.698781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eee38 00:20:16.187 [2024-11-19 17:19:08.699974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.700020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:16.187 [2024-11-19 17:19:08.707706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8a50 00:20:16.187 [2024-11-19 17:19:08.709534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.187 [2024-11-19 17:19:08.709570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.717996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ddc00 00:20:16.538 [2024-11-19 17:19:08.718993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.727421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2948 00:20:16.538 [2024-11-19 17:19:08.728654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.728703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.737145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ea680 00:20:16.538 [2024-11-19 17:19:08.738136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.738198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.746212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ec840 00:20:16.538 [2024-11-19 17:19:08.747704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.747789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.755451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e8d30 00:20:16.538 [2024-11-19 17:19:08.756518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.767721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ef270 00:20:16.538 [2024-11-19 17:19:08.769319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.769367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.775039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f9f68 00:20:16.538 [2024-11-19 17:19:08.775695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.775781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.787083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5658 00:20:16.538 [2024-11-19 17:19:08.788243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.788293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.796018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7538 00:20:16.538 [2024-11-19 17:19:08.797576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.797624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.805629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f35f0 00:20:16.538 [2024-11-19 17:19:08.806589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.806636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.817456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ecc78 00:20:16.538 [2024-11-19 17:19:08.818747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.818802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.826325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f9f68 00:20:16.538 [2024-11-19 17:19:08.827749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.827806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.835955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e88f8 00:20:16.538 [2024-11-19 17:19:08.836923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.836971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.847989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166dece0 00:20:16.538 [2024-11-19 17:19:08.849451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.849498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.854951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0788 00:20:16.538 [2024-11-19 17:19:08.855681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 
17:19:08.855731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.866673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de038 00:20:16.538 [2024-11-19 17:19:08.867905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.867954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.875280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2d80 00:20:16.538 [2024-11-19 17:19:08.876558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.876607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.885919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eaef0 00:20:16.538 [2024-11-19 17:19:08.887145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.887192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.894972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fd640 00:20:16.538 [2024-11-19 17:19:08.896500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.896547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.904653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ea680 00:20:16.538 [2024-11-19 17:19:08.905693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.905770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.916388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ec840 00:20:16.538 [2024-11-19 17:19:08.917787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.538 [2024-11-19 17:19:08.917843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.538 [2024-11-19 17:19:08.923351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ec840 00:20:16.538 [2024-11-19 17:19:08.924071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
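Each repeating burst in this flood is one injected failure: data_crc32_calc_done() in tcp.c detects the CRC32C mismatch on the qpair, the offending WRITE command is printed, and its completion carries status (00/22), i.e. status code type 0 (generic) / status 0x22 COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 (Do Not Retry clear), so together with --bdev-retry-count -1 the command is simply resubmitted. A quick way to tally the injected failures from a saved copy of this output (bperf.log is a hypothetical capture, not a file the test writes):

    grep -c 'data_crc32_calc_done: \*ERROR\*' bperf.log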
00:20:16.538 [2024-11-19 17:19:08.924120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.935044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4298 00:20:16.539 [2024-11-19 17:19:08.936246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.936296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.944991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8618 00:20:16.539 [2024-11-19 17:19:08.946110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.946158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.953686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f96f8 00:20:16.539 [2024-11-19 17:19:08.954712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.954768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.962178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e7c50 00:20:16.539 [2024-11-19 17:19:08.962894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.962941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.973953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166dece0 00:20:16.539 [2024-11-19 17:19:08.975169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.975217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.982832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de038 00:20:16.539 [2024-11-19 17:19:08.983646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.983693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:08.991591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4de8 00:20:16.539 [2024-11-19 17:19:08.992868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24874 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:08.992914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:09.000758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ef270 00:20:16.539 [2024-11-19 17:19:09.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:09.001597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:09.011264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ecc78 00:20:16.539 [2024-11-19 17:19:09.012130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:09.012167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:09.019923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2d80 00:20:16.539 [2024-11-19 17:19:09.020784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:09.020834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:09.032248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ef270 00:20:16.539 [2024-11-19 17:19:09.033447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.539 [2024-11-19 17:19:09.033497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:16.539 [2024-11-19 17:19:09.040906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ee5c8 00:20:16.798 [2024-11-19 17:19:09.042258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.042308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.051702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e0630 00:20:16.798 [2024-11-19 17:19:09.052934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.052981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.060613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f81e0 00:20:16.798 [2024-11-19 17:19:09.062080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17880 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.062128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.070783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f31b8 00:20:16.798 [2024-11-19 17:19:09.072007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.072078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.079912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ee190 00:20:16.798 [2024-11-19 17:19:09.081359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.081406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.089217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f9b30 00:20:16.798 [2024-11-19 17:19:09.090210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.090255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.100638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4140 00:20:16.798 [2024-11-19 17:19:09.102149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.102195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.107592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e1b48 00:20:16.798 [2024-11-19 17:19:09.108430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.108466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.119890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fd640 00:20:16.798 [2024-11-19 17:19:09.121393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.121439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.126679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fc560 00:20:16.798 [2024-11-19 17:19:09.127466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:4031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.127513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.138992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166feb58 00:20:16.798 [2024-11-19 17:19:09.140505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.140550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.145915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8618 00:20:16.798 [2024-11-19 17:19:09.146705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.158445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fc560 00:20:16.798 [2024-11-19 17:19:09.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.159885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.166999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5220 00:20:16.798 [2024-11-19 17:19:09.168202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.168252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.177382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166feb58 00:20:16.798 [2024-11-19 17:19:09.178642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.178688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.186310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4140 00:20:16.798 [2024-11-19 17:19:09.187716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.187790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.195649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e3060 00:20:16.798 [2024-11-19 17:19:09.196704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:8745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.196771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.204544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4b08 00:20:16.798 [2024-11-19 17:19:09.205795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.213849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de470 00:20:16.798 [2024-11-19 17:19:09.214669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.214716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.223522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0788 00:20:16.798 [2024-11-19 17:19:09.224097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.224133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.233500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fa3a0 00:20:16.798 [2024-11-19 17:19:09.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.234622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.242533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eea00 00:20:16.798 [2024-11-19 17:19:09.244129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.798 [2024-11-19 17:19:09.244166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:16.798 [2024-11-19 17:19:09.252651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f92c0 00:20:16.799 [2024-11-19 17:19:09.253539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.253585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.262207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2948 00:20:16.799 [2024-11-19 17:19:09.263316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:11907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.263363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.273802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f5be8 00:20:16.799 [2024-11-19 17:19:09.275260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.275306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.280675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0bc0 00:20:16.799 [2024-11-19 17:19:09.281428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.281475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.292156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fb8b8 00:20:16.799 [2024-11-19 17:19:09.293398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.293445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.301002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eb328 00:20:16.799 [2024-11-19 17:19:09.302455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.302503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.310355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fda78 00:20:16.799 [2024-11-19 17:19:09.311380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.311424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.320003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e99d8 00:20:16.799 [2024-11-19 17:19:09.320826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.799 [2024-11-19 17:19:09.320883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.799 [2024-11-19 17:19:09.328710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4f40 00:20:17.057 [2024-11-19 17:19:09.329614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.329663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:17.057 [2024-11-19 17:19:09.340641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0350 00:20:17.057 [2024-11-19 17:19:09.342056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.342103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:17.057 [2024-11-19 17:19:09.347573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e8088 00:20:17.057 [2024-11-19 17:19:09.348292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:17.057 [2024-11-19 17:19:09.359146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f9b30 00:20:17.057 [2024-11-19 17:19:09.360327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.360391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:17.057 [2024-11-19 17:19:09.368006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ebb98 00:20:17.057 [2024-11-19 17:19:09.369329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.369377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:17.057 [2024-11-19 17:19:09.377287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ee190 00:20:17.057 [2024-11-19 17:19:09.379098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.057 [2024-11-19 17:19:09.379146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:17.058 26176.00 IOPS, 102.25 MiB/s [2024-11-19T17:19:09.591Z] [2024-11-19 17:19:09.389812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8a50 00:20:17.058 [2024-11-19 17:19:09.391042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.391103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.398477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with 
pdu=0x2000166f7970 00:20:17.058 [2024-11-19 17:19:09.399588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.399635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.409209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f3e60 00:20:17.058 [2024-11-19 17:19:09.410444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.410491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.418544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e1f80 00:20:17.058 [2024-11-19 17:19:09.420076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.420128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.428646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ec408 00:20:17.058 [2024-11-19 17:19:09.429669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.429717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.438996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166df550 00:20:17.058 [2024-11-19 17:19:09.439992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.447589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f31b8 00:20:17.058 [2024-11-19 17:19:09.448484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.448545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.459731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166dece0 00:20:17.058 [2024-11-19 17:19:09.460935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.460982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.468685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa5be40) with pdu=0x2000166f8e88 00:20:17.058 [2024-11-19 17:19:09.470163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.470209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.478172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e88f8 00:20:17.058 [2024-11-19 17:19:09.479090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.479136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.489964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166edd58 00:20:17.058 [2024-11-19 17:19:09.491373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.491419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.497706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eff18 00:20:17.058 [2024-11-19 17:19:09.498685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.498732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.510163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de470 00:20:17.058 [2024-11-19 17:19:09.511688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.511761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.517322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fa7d8 00:20:17.058 [2024-11-19 17:19:09.518035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.518082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.529563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ef270 00:20:17.058 [2024-11-19 17:19:09.530628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.530674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.538734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa5be40) with pdu=0x2000166f3a28 00:20:17.058 [2024-11-19 17:19:09.540122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.540159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.548358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7100 00:20:17.058 [2024-11-19 17:19:09.549175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.549236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.559925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de038 00:20:17.058 [2024-11-19 17:19:09.561318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.561365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.566823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eff18 00:20:17.058 [2024-11-19 17:19:09.567446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.567522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.578540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eb328 00:20:17.058 [2024-11-19 17:19:09.579663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.058 [2024-11-19 17:19:09.579709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:17.058 [2024-11-19 17:19:09.587846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6890 00:20:17.317 [2024-11-19 17:19:09.589543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.589591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.597531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8a50 00:20:17.317 [2024-11-19 17:19:09.598413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.598461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.609183] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e27f0 00:20:17.317 [2024-11-19 17:19:09.610577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.610625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.618524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e01f8 00:20:17.317 [2024-11-19 17:19:09.620459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.620523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.628838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e27f0 00:20:17.317 [2024-11-19 17:19:09.629891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.629927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.638227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eaef0 00:20:17.317 [2024-11-19 17:19:09.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.639543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.645396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8e88 00:20:17.317 [2024-11-19 17:19:09.645984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.646021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.657179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7970 00:20:17.317 [2024-11-19 17:19:09.658252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.658298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.666081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0350 00:20:17.317 [2024-11-19 17:19:09.667317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.667363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.675337] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e9e10 00:20:17.317 [2024-11-19 17:19:09.676217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.676265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.687095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0bc0 00:20:17.317 [2024-11-19 17:19:09.688479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.688556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.694289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fa7d8 00:20:17.317 [2024-11-19 17:19:09.694972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.695019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.705923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e6738 00:20:17.317 [2024-11-19 17:19:09.706934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.706979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.717372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166dece0 00:20:17.317 [2024-11-19 17:19:09.718881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.718929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.724235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de038 00:20:17.317 [2024-11-19 17:19:09.725027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.725059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 17:19:09.735684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e12d8 00:20:17.317 [2024-11-19 17:19:09.736995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.317 [2024-11-19 17:19:09.737041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:17.317 [2024-11-19 
17:19:09.742535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eaef0 00:20:17.318 [2024-11-19 17:19:09.743139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.743186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.754120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6020 00:20:17.318 [2024-11-19 17:19:09.755188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.755234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.763041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0788 00:20:17.318 [2024-11-19 17:19:09.764353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.764401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.773734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ed0b0 00:20:17.318 [2024-11-19 17:19:09.774957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.775005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.782488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4298 00:20:17.318 [2024-11-19 17:19:09.783781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.783836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.791825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ea680 00:20:17.318 [2024-11-19 17:19:09.792793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.792861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.803349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fc128 00:20:17.318 [2024-11-19 17:19:09.804820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.804866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
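Each repeating triplet in this stream is one injected digest failure: data_crc32_calc_done() in tcp.c detects that the CRC32C data digest of a received PDU does not match its payload, the affected WRITE command is printed, and the completion is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0, status code 0x22. A minimal sketch for tallying these completions offline, assuming the console output has been saved with one entry per line (the file name autorun.log is an assumption, not a path this job wrote):

    # count injected digest errors that surfaced as transient transport errors
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' autorun.log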
00:20:17.318 [2024-11-19 17:19:09.810203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fb8b8 00:20:17.318 [2024-11-19 17:19:09.810943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.810989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.821714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e99d8 00:20:17.318 [2024-11-19 17:19:09.822963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.823010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.830614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e01f8 00:20:17.318 [2024-11-19 17:19:09.832056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.832104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:17.318 [2024-11-19 17:19:09.839760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f92c0 00:20:17.318 [2024-11-19 17:19:09.840767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.318 [2024-11-19 17:19:09.840811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.851557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fac10 00:20:17.577 [2024-11-19 17:19:09.853102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.853163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.858476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f81e0 00:20:17.577 [2024-11-19 17:19:09.859280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.859328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.869951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fc128 00:20:17.577 [2024-11-19 17:19:09.871239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.871286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 
p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.876811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f8618 00:20:17.577 [2024-11-19 17:19:09.877398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.877430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.888341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4578 00:20:17.577 [2024-11-19 17:19:09.889432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.889477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.897230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e27f0 00:20:17.577 [2024-11-19 17:19:09.898461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.898508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.906501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7100 00:20:17.577 [2024-11-19 17:19:09.907361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.907409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.917944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fef90 00:20:17.577 [2024-11-19 17:19:09.919296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.919342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.924811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ee190 00:20:17.577 [2024-11-19 17:19:09.925432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.925479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.936246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6020 00:20:17.577 [2024-11-19 17:19:09.937383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.937429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 
cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.945157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ed0b0 00:20:17.577 [2024-11-19 17:19:09.946498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.946546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.954479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e0a68 00:20:17.577 [2024-11-19 17:19:09.955407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.955452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.965905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fb8b8 00:20:17.577 [2024-11-19 17:19:09.967314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.967360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.972678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f6cc8 00:20:17.577 [2024-11-19 17:19:09.973380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.973426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.984521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e6738 00:20:17.577 [2024-11-19 17:19:09.985716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.985785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:09.993725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eaef0 00:20:17.577 [2024-11-19 17:19:09.995613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:09.995662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.002648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e5a90 00:20:17.577 [2024-11-19 17:19:10.003495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.003534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.014245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4298 00:20:17.577 [2024-11-19 17:19:10.015199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.015247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.024866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166de8a8 00:20:17.577 [2024-11-19 17:19:10.025846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.025896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.036512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7970 00:20:17.577 [2024-11-19 17:19:10.037398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.037445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.048599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f0bc0 00:20:17.577 [2024-11-19 17:19:10.049946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.049995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.055681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e6b70 00:20:17.577 [2024-11-19 17:19:10.056398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.056447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.068715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f3a28 00:20:17.577 [2024-11-19 17:19:10.070164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.070211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.076404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ed920 00:20:17.577 [2024-11-19 17:19:10.077275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.088488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fdeb0 00:20:17.577 [2024-11-19 17:19:10.089836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.089875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.095619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e7818 00:20:17.577 [2024-11-19 17:19:10.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.577 [2024-11-19 17:19:10.096378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.577 [2024-11-19 17:19:10.107362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f35f0 00:20:17.836 [2024-11-19 17:19:10.108431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.116791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e38d0 00:20:17.836 [2024-11-19 17:19:10.117917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.117964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.126157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fac10 00:20:17.836 [2024-11-19 17:19:10.127447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.127496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.134645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2948 00:20:17.836 [2024-11-19 17:19:10.135438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.135488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.143858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7100 00:20:17.836 [2024-11-19 17:19:10.144633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.144684] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.154321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4f40 00:20:17.836 [2024-11-19 17:19:10.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.155513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.161382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e1f80 00:20:17.836 [2024-11-19 17:19:10.162089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.162137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.173783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ee5c8 00:20:17.836 [2024-11-19 17:19:10.175294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.180805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166df988 00:20:17.836 [2024-11-19 17:19:10.181490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.181537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.192867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ebfd0 00:20:17.836 [2024-11-19 17:19:10.194008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.194055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.201956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7100 00:20:17.836 [2024-11-19 17:19:10.203273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.203319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.211219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ebfd0 00:20:17.836 [2024-11-19 17:19:10.212158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.212206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.222756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166df988 00:20:17.836 [2024-11-19 17:19:10.224186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.224235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.229627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e4578 00:20:17.836 [2024-11-19 17:19:10.230352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.230399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.241169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f7970 00:20:17.836 [2024-11-19 17:19:10.242390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.242436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.250814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166eea00 00:20:17.836 [2024-11-19 17:19:10.252161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.252210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.257777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ec408 00:20:17.836 [2024-11-19 17:19:10.258387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.258448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.269299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f9b30 00:20:17.836 [2024-11-19 17:19:10.270404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.270449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.278197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fcdd0 00:20:17.836 [2024-11-19 17:19:10.279538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.279586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.288940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166e3d08 00:20:17.836 [2024-11-19 17:19:10.290187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.290232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.295683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166df118 00:20:17.836 [2024-11-19 17:19:10.296332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.296426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.307341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f92c0 00:20:17.836 [2024-11-19 17:19:10.308487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.308534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.316232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ef6a8 00:20:17.836 [2024-11-19 17:19:10.317525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.317571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.836 [2024-11-19 17:19:10.325619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f4298 00:20:17.836 [2024-11-19 17:19:10.326544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.836 [2024-11-19 17:19:10.326590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:17.837 [2024-11-19 17:19:10.337277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166ea680 00:20:17.837 [2024-11-19 17:19:10.338673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.837 [2024-11-19 17:19:10.338719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:17.837 [2024-11-19 17:19:10.344196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fe2e8 00:20:17.837 [2024-11-19 17:19:10.344891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.837 [2024-11-19 
17:19:10.344946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:20:17.837 [2024-11-19 17:19:10.355749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fb480
00:20:17.837 [2024-11-19 17:19:10.356966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:17.837 [2024-11-19 17:19:10.357010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:20:17.837 [2024-11-19 17:19:10.364242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166f2d80
00:20:17.837 [2024-11-19 17:19:10.365933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:17.837 [2024-11-19 17:19:10.365981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:20:18.095 [2024-11-19 17:19:10.373802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5be40) with pdu=0x2000166fa3a0
00:20:18.095 [2024-11-19 17:19:10.374599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:18.095 [2024-11-19 17:19:10.374645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:20:18.095 26164.50 IOPS, 102.21 MiB/s
00:20:18.095 Latency(us)
00:20:18.095 [2024-11-19T17:19:10.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.095 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:18.095 nvme0n1 : 2.00 26172.27 102.24 0.00 0.00 4884.05 1966.08 14000.87
00:20:18.095 [2024-11-19T17:19:10.628Z] ===================================================================================================================
00:20:18.095 [2024-11-19T17:19:10.628Z] Total : 26172.27 102.24 0.00 0.00 4884.05 1966.08 14000.87
00:20:18.095 {
00:20:18.095   "results": [
00:20:18.095     {
00:20:18.095       "job": "nvme0n1",
00:20:18.095       "core_mask": "0x2",
00:20:18.095       "workload": "randwrite",
00:20:18.095       "status": "finished",
00:20:18.095       "queue_depth": 128,
00:20:18.095       "io_size": 4096,
00:20:18.095       "runtime": 2.004297,
00:20:18.095       "iops": 26172.268880310654,
00:20:18.095       "mibps": 102.23542531371349,
00:20:18.095       "io_failed": 0,
00:20:18.095       "io_timeout": 0,
00:20:18.095       "avg_latency_us": 4884.053116682581,
00:20:18.095       "min_latency_us": 1966.08,
00:20:18.095       "max_latency_us": 14000.872727272726
00:20:18.095     }
00:20:18.095   ],
00:20:18.095   "core_count": 1
00:20:18.095 }
00:20:18.095 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:18.095 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:18.095 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:18.095 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:18.095 | .driver_specific
00:20:18.095 | .nvme_error
00:20:18.095 | .status_code
00:20:18.095 | .command_transient_transport_error'
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94706
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94706 ']'
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94706
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94706
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94706'
killing process with pid 94706
Received shutdown signal, test time was about 2.000000 seconds
00:20:18.353
00:20:18.353 Latency(us)
00:20:18.353 [2024-11-19T17:19:10.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.353 [2024-11-19T17:19:10.886Z] ===================================================================================================================
00:20:18.353 [2024-11-19T17:19:10.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94706
00:20:18.353 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94706
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94783
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94783 /var/tmp/bperf.sock
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94783 ']'
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
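The (( 205 > 0 )) check above is host/digest.sh asserting that the first run's injected CRC corruptions were actually recorded by the controller: get_transient_errcount pulls the transient-transport-error counter out of the bdev_get_iostat RPC with the jq filter traced earlier, the bperf instance (pid 94706) is then killed, and a second error run (randwrite, 128 KiB I/O, queue depth 16) is launched below. A minimal sketch of the same extraction as standalone shell, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and that it was configured with bdev_nvme_set_options --nvme-error-stat before the controller was attached (otherwise .driver_specific.nvme_error is not populated):

    # equivalent of get_transient_errcount nvme0n1 (paths as in this log)
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # non-zero: the corrupted digests were seen and counted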
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:18.612 17:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:18.612 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:18.612 Zero copy mechanism will not be used.
00:20:18.612 [2024-11-19 17:19:10.966621] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:20:18.612 [2024-11-19 17:19:10.966732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94783 ]
00:20:18.871 [2024-11-19 17:19:11.102361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:18.871 [2024-11-19 17:19:11.147806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:19.437 17:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:19.437 17:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:19.437 17:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:19.437 17:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:20.003 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:20.261 nvme0n1
00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@69 -- # bperf_py perform_tests 00:20:20.262 17:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:20.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:20.262 Zero copy mechanism will not be used. 00:20:20.262 Running I/O for 2 seconds... 00:20:20.262 [2024-11-19 17:19:12.745980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.746131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.750683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.750798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.750822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.755038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.755136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.755159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.759218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.759291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.759312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.763362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.763441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.767405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.767483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.767504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.771472] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.771551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.771573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.775572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.775650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.775672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.779627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.779708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.779730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.783758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.783833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.783854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.787866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.787977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.787998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.262 [2024-11-19 17:19:12.791915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.262 [2024-11-19 17:19:12.791988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.262 [2024-11-19 17:19:12.792009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.795939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.796127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.796150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.800008] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.800101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.800122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.803988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.804109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.807935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.808007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.808039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.811947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.812035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.812074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.816008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.816120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.816141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.820105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.820180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.820202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.824120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.824196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.824219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 
17:19:12.828215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.828299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.828321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.832220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.832330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.836285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.836358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.836379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.840221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.840308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.840329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.844267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.844340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.844361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.848232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-19 17:19:12.848304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-19 17:19:12.848325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-19 17:19:12.852327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.852402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.852426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
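
Each repeated triple above is one injected failure making a full round trip: the harness corrupts the CRC-32C accel operation (accel_error_inject_error -o crc32c -t corrupt), the digest check in tcp.c's data_crc32_calc_done fails, and the write completes with COMMAND TRANSIENT TRANSPORT ERROR so bdevperf retries it. A minimal sketch of the reflected CRC-32C that the data digest uses, in bitwise form for clarity rather than SPDK's optimized implementation:

    # Reflected CRC-32C (Castagnoli, polynomial 0x1EDC6F41) -- the checksum
    # behind the 'Data digest error' lines above. Bitwise for readability.
    def crc32c(data: bytes, crc: int = 0) -> int:
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value
    payload = bytes(131072)                    # the 131072-byte I/O size logged above
    good = crc32c(payload)
    corrupted = good ^ 1                       # stand-in for the injected corruption
    # The receiver recomputes the digest over the payload; any mismatch is
    # reported as a data digest error and the write fails as transient.
    assert corrupted != crc32c(payload)
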
00:20:20.523 [2024-11-19 17:19:12.856375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.856456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.856478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.860419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.860497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.860518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.864372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.864458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.864480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.868398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.868486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.868507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.872376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.872449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.872471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.876382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.876455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.876476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.880343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.880415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.880436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.884366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.884441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.884463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.888323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.888393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.888415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.892356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.892440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.892461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.896466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.896538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.896559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.900552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.900634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.900656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.904590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.904662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.904684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.908627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.908698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.908719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.912686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.912757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.916686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.916771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.916793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.920674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.920760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-19 17:19:12.920782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-19 17:19:12.924663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-19 17:19:12.924733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.924770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.928600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.928672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.928693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.932593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.932665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.932685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.936584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.936658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.936679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.940599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.940671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.940692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.944564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.944637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.944658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.948661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.948732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.948768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.952572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.952644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.952665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.956613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.956685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.956706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.960565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.960637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.960657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.964655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.964728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.964766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.968712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.968821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.968843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.972731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.972818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.972839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.976834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.976920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.976942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.980941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.981028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.981050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.985096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.985168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.985189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.989199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.989287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.989308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.993344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.993425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.993447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:12.997401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.524 [2024-11-19 17:19:12.997476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.524 [2024-11-19 17:19:12.997497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.524 [2024-11-19 17:19:13.001508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.001582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.001604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.005633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.005715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.005755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.009765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.009840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.009862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.013906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.013980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.018044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.018117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.018139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.022198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.022312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 
17:19:13.022336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.026288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.026380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.026403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.030492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.030584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.030606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.034669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.034772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.034795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.038826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.038901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.042990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.043064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.043086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.047144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.047225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.525 [2024-11-19 17:19:13.047247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.525 [2024-11-19 17:19:13.051303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.525 [2024-11-19 17:19:13.051385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
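
The lba/len fields vary in every failed command print, so a flood like this reduces to a per-LBA error count when triaging a run. A hypothetical post-processing helper over exactly this output format (the log file name is illustrative):

    import re
    from collections import Counter

    # Matches command prints such as:
    #   WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
    CMD = re.compile(r"WRITE sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

    def digest_error_lbas(log_text: str) -> Counter:
        """Count failed WRITE command prints per LBA in a bdevperf log."""
        return Counter(int(m.group(1)) for m in CMD.finditer(log_text))

    with open("bdevperf.log") as f:  # hypothetical capture of this console output
        print(digest_error_lbas(f.read()).most_common(5))
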
00:20:20.525 [2024-11-19 17:19:13.051406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.055417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.055489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.055510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.059620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.059711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.059733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.063783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.063871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.063892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.068008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.068095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.068116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.072195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.072271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.072294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.076392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.076478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.076500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.080642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.080718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.080740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.084832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.084910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.084931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.089001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.089077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.089099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.093160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.093236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.093257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.097289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.097364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.097386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.101443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.101518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.101540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.105566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.105635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.105657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.109721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.109823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.109846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.113891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.113978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.113999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.118054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.118138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.118160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.122202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.122277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.122299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.126337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.126413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.126434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.130509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.130578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.130599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.134651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.134753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.138823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.138889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.138911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.142993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.143068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.143090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.147232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.147313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.147335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.787 [2024-11-19 17:19:13.151293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.787 [2024-11-19 17:19:13.151379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.787 [2024-11-19 17:19:13.151400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.155337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.155424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.155445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.159351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.159424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.159445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.163345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.163418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.163439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.167367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.167440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.167461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.171304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.171382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.171403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.175287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.175359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.175380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.179250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.179326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.179347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.183276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.183350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.187261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.187342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.187363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.191256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.191329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.191350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.195230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.195306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.195327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.199210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.199292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.199313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.203184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.203260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.203281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.207186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.207263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.207284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.211165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.211239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.211260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.215153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.215252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.215275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.219095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.219179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.219201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.223081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 
17:19:13.223184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.223206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.227022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.227095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.227116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.231005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.231096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.231119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.235064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.235152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.235173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.239090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.239192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.239213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.243087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.243175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.243196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.247026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.247100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.247122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.251006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 
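
Everything in this scenario is driven over the bdevperf application's JSON-RPC socket, so the rpc.py calls traced earlier can be scripted directly. A sketch reproducing the setup sequence from this run, using the same socket, paths, and arguments that appear in the log:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path used in this run
    SOCK = "/var/tmp/bperf.sock"

    def rpc(*args: str) -> str:
        """Issue one SPDK JSON-RPC call against the bdevperf socket."""
        return subprocess.check_output([RPC, "-s", SOCK, *args], text=True)

    rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.3",
        "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")

The I/O itself is then kicked off with examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests, as in the trace above.
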
00:20:20.788 [2024-11-19 17:19:13.251079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.251100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.255015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.788 [2024-11-19 17:19:13.255088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.788 [2024-11-19 17:19:13.255109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.788 [2024-11-19 17:19:13.259001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.259072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.259092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.262987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.263074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.263095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.266942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.267014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.267035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.270905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.271016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.274929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.275018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.275039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.278915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) 
with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.278998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.279020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.282914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.283001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.283022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.286944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.287018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.287040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.290972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.291060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.291081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.294978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.295051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.295073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.298957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.299048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.302982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.303057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.303079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.306982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.307051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.307072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.311014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.311089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.311110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.789 [2024-11-19 17:19:13.315033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:20.789 [2024-11-19 17:19:13.315102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.789 [2024-11-19 17:19:13.315131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.319104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.049 [2024-11-19 17:19:13.319176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.049 [2024-11-19 17:19:13.319199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.323054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.049 [2024-11-19 17:19:13.323127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.049 [2024-11-19 17:19:13.323148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.327147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.049 [2024-11-19 17:19:13.327234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.049 [2024-11-19 17:19:13.327255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.331119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.049 [2024-11-19 17:19:13.331199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.049 [2024-11-19 17:19:13.331220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.335150] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.049 [2024-11-19 17:19:13.335235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.049 [2024-11-19 17:19:13.335256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.049 [2024-11-19 17:19:13.339125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.339212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.339233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.343208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.343288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.343310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.347157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.347226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.347247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.351230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.351303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.351323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.355122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.355193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.355214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.359143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.359247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.359269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.363054] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.363124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.363145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.367058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.367145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.367182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.371023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.371111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.371132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.375005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.375093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.375115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.378982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.379059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.379080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.382956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.383040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.383062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.386956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.387028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.387049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 
[2024-11-19 17:19:13.390937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.391024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.391046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.394877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.394975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.398848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.398930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.398952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.402871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.402951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.402972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.406849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.406932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.406953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.410839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.410917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.410938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.414809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.414894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.414915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:21.050 [2024-11-19 17:19:13.418829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.418907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.418928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.422829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.422905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.422927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.426802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.426872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.426893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.430816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.430909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.430932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.434779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.434868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.434889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.438839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.438920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.438942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.050 [2024-11-19 17:19:13.442816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.050 [2024-11-19 17:19:13.442894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.050 [2024-11-19 17:19:13.442915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.446792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.446873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.450758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.450839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.450860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.454854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.454965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.454987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.458872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.458951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.458972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.462864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.462946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.462968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.466761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.466851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.466872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.470685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.470794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.470817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.474640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.474766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.474787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.478598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.478685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.482614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.482693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.482714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.486587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.486678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.486698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.490613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.490682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.490703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.494548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.494621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.494642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.498524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.498597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.498618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.502474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.502546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.502567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.506469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.506542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.506562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.510469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.510540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.510561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.514573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.514646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.514667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.518595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.518667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.518688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.522751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.522826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.522847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.526795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.526900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.526923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.530762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.530890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.534844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.534935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.534956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.538862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.538937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.538959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.542928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.543002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.543024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.547041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.547151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.051 [2024-11-19 17:19:13.547171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.051 [2024-11-19 17:19:13.551104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.051 [2024-11-19 17:19:13.551178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.551200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.555184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.555256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.555277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.559205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.559272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.559294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.563256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.563364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.567304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.567393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.567414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.571356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.571434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.571454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.575281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.575351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.575372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.052 [2024-11-19 17:19:13.579302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.052 [2024-11-19 17:19:13.579371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.052 [2024-11-19 17:19:13.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.583250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.583321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 
17:19:13.583342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.587350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.587423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.587444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.591319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.591391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.591411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.595363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.595445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.595466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.599382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.599453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.599474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.603364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.603436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.603457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.607410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.607481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.607502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.611413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.611483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:21.312 [2024-11-19 17:19:13.611504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.615398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.615471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.312 [2024-11-19 17:19:13.615491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.312 [2024-11-19 17:19:13.619541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.312 [2024-11-19 17:19:13.619616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.619638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.623616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.623687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.623708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.627663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.627746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.627780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.631723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.631807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.631829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.635775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.635850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.635871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.639856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.639944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.639965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.644158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.644249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.644271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.648328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.648451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.648473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.652511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.652604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.652626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.656821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.656905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.656927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.661039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.661115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.661137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.665222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.665293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.665315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.669468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.669543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.669565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.673682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.673778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.673800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.677764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.677880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.677901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.681855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.681948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.681969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.685845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.685916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.689942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.690014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.690035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.693877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.693981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.694003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.697804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.697895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.697916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.701699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.701791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.701813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.705660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.705752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.705774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.313 [2024-11-19 17:19:13.709632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.313 [2024-11-19 17:19:13.709711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.313 [2024-11-19 17:19:13.709732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.713695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.713788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.713810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.717657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.717751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.717773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.721820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.721927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.721948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.725820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.725911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.725932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.729871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.729940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.729962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.733970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.734043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.734065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.737996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.738094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.738115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.741993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.742065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.742086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 7646.00 IOPS, 955.75 MiB/s [2024-11-19T17:19:13.847Z] [2024-11-19 17:19:13.747032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.747134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.747156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.751236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.751326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.751366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.755422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 
[2024-11-19 17:19:13.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.755520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.759633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.759705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.759726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.763648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.763727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.763763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.767688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.767778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.767800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.771720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.771827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.771849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.775800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.775880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.775902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.779800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.779889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.779910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.783801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with 
pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.783874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.783896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.787814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.787892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.787914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.791834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.791913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.791934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.795849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.795928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.795950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.799845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.799918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.799939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.803803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.803880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.803901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.807903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.314 [2024-11-19 17:19:13.807981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.314 [2024-11-19 17:19:13.808003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.314 [2024-11-19 17:19:13.811860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.811939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.811960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.815783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.815863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.815885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.819718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.819812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.819833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.823746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.823838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.823860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.827698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.827793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.827815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.831698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.831790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.831811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.835638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.835718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.835754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.315 [2024-11-19 17:19:13.839713] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.315 [2024-11-19 17:19:13.839794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.315 [2024-11-19 17:19:13.839815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.574 [2024-11-19 17:19:13.843658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.574 [2024-11-19 17:19:13.843751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.574 [2024-11-19 17:19:13.843773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.574 [2024-11-19 17:19:13.847866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.847973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.847994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.851847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.851925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.851946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.855906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.855995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.856016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.859868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.859946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.859967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.863883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.863963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.863984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.867792] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.867864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.867885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.871849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.871922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.871943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.875821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.875899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.875921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.879878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.879957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.879978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.883825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.883933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.883954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.887857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.887935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.887957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.891822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.891901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.891922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.575 
[2024-11-19 17:19:13.895881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.895964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.895985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.899838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.899916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.899937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.903829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.903919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.903940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.907788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.907879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.907899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.911743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.911834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.911855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.915699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.915784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.915805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.919741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.919832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.919854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.923712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.923804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.923826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.927743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.927845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.927867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.931657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.931748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.931771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.575 [2024-11-19 17:19:13.935648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.575 [2024-11-19 17:19:13.935727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.575 [2024-11-19 17:19:13.935749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.939628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.939706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.943657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.943735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.943757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.947586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.947672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.947694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.951627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.951706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.951728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.955568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.955656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.955677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.959538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.959616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.959638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.963485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.963573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.963594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.967519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.967608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.967629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.971508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.971581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.971602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.975553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.975631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.975652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.979494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.979573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.983517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.983603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.983625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.987469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.987539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.987560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.991482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.991553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.991574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.995461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.995546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.995567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:13.999453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:13.999539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:13.999560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.003411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.003481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.003502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.007427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.007499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.007520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.011360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.011447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.011469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.015390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.015477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.015498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.019333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.019402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.019422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.023323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.023394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.023415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.027301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.027372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.027393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.031283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.031354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.031375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.035253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.576 [2024-11-19 17:19:14.035335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.576 [2024-11-19 17:19:14.035355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.576 [2024-11-19 17:19:14.039265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.039345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.039366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.043235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.043325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.043347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.047222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.047303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.047326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.051226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.051315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.051335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.055238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.055334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.059134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.059204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.059225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.063156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.063243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.063264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.067129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.067200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.067221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.071150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.071251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.071272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.075117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.075194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.075216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.079093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.079166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.079201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.083110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.083197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.083219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.087096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.087199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 
17:19:14.087221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.091079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.091150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.091171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.095113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.095201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.095222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.099116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.099187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.099208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.577 [2024-11-19 17:19:14.103097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.577 [2024-11-19 17:19:14.103179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.577 [2024-11-19 17:19:14.103201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.107097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.107168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.107189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.111185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.111272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.115198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.115269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:21.837 [2024-11-19 17:19:14.115290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.119246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.119318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.119339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.123250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.123326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.123347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.127338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.127414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.127435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.131303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.131388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.131409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.135425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.135494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.135514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.139394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.139466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.139486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.143461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.143532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.143552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.147456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.147529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.147550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.151606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.151698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.155637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.155730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.155752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.159669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.159778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.159814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.163809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.163896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.167977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.168073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.168095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.172017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.172118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.172140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.176139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.176212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.176233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.837 [2024-11-19 17:19:14.180291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.837 [2024-11-19 17:19:14.180364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.837 [2024-11-19 17:19:14.180387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.184401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.184474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.184496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.188483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.188557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.188579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.192584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.192662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.192684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.196676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.196762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.196784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.200884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.200982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.201004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.205038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.205132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.205154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.209177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.209279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.209301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.213317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.213406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.213428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.217439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.217530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.221566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.221678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.221700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.225652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.225757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.225779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.229830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.229919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.229941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.233901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.234004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.234026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.237974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.238077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.238099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.242096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.242200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.242221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.246213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.246305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.246328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.250334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.250446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.250469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.254442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.254533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.254555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.258579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.258691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.258715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.262665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.262768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.262791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.266811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.266904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.266926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.270920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.271009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.271031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.275035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.275139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.275161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.838 [2024-11-19 17:19:14.279125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.838 [2024-11-19 17:19:14.279221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.838 [2024-11-19 17:19:14.279243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.283269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.283362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.283383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.287375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 
17:19:14.287466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.287487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.291524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.291636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.291658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.295654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.295755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.295778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.299810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.299900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.299922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.303947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.304047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.304069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.308091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.308180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.308202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.312256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.312347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.312368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.316391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 
00:20:21.839 [2024-11-19 17:19:14.316495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.316517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.320604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.320677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.320699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.324771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.324846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.324868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.328952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.329039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.329060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.333037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.333107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.333128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.337184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.337261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.337282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.341232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.341303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.341324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.345230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) 
with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.345315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.345336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.349268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.349345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.349366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.353381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.353457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.353478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.357402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.357479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.839 [2024-11-19 17:19:14.357501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.839 [2024-11-19 17:19:14.361473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.839 [2024-11-19 17:19:14.361551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.840 [2024-11-19 17:19:14.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.840 [2024-11-19 17:19:14.365412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:21.840 [2024-11-19 17:19:14.365499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.840 [2024-11-19 17:19:14.365520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.369473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.369574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.369594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.373470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.373540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.373561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.377534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.377604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.377626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.381457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.381527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.381548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.385497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.385568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.385590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.389537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.389608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.389629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.393545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.393615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.393637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.397551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.397622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.397643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.099 [2024-11-19 17:19:14.401578] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.099 [2024-11-19 17:19:14.401648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.099 [2024-11-19 17:19:14.401669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.405582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.405653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.405674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.409636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.409721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.409756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.413559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.413640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.413660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.417622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.417705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.417726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.421565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.421635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.421656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.425638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.425709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.425731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.429635] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.429706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.429727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.433704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.433789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.433811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.437713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.437798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.437819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.441711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.441798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.441819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.445712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.445797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.445818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.449700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.449782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.449804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.453697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.453784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 
[2024-11-19 17:19:14.457817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.457888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.457909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.461807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.461880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.461901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.465858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.465929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.465950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.469791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.469871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.469892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.473711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.473799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.473820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.477657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.477727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.477764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.481712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.481800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.481821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.485699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.485786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.485808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.489686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.489772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.493627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.493715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.493750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.497634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.497716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.100 [2024-11-19 17:19:14.497751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.100 [2024-11-19 17:19:14.501605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.100 [2024-11-19 17:19:14.501681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.501702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.505585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.505655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.505675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.509550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.509635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.509656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.513581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.513687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.517519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.517596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.521578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.521651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.521672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.525639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.525710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.525731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.529684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.529787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.529808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.533715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.533802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.537727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.537829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.537850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.541674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.541774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.545774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.545853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.545874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.549757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.549836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.549857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.554065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.554137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.554159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.558212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.558298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.558319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.562349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.562426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.562447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.566427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.566504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.566525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.570567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.570652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.570673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.574614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.574690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.574711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.578715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.578811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.101 [2024-11-19 17:19:14.578832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.101 [2024-11-19 17:19:14.582754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.101 [2024-11-19 17:19:14.582857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.582878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.586750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.586808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.586829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.590682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.590799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.590821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.594615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.594686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.594707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.598570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.598639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.598660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.602568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.602639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.602660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.606501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.606571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.606592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.610539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.610625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.610646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.614465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.614556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.618493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.618559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.618581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.622477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.622554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 
17:19:14.622574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.102 [2024-11-19 17:19:14.626477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.102 [2024-11-19 17:19:14.626555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.102 [2024-11-19 17:19:14.626576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.630495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.630634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.361 [2024-11-19 17:19:14.630655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.634516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.634603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.361 [2024-11-19 17:19:14.634624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.638540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.361 [2024-11-19 17:19:14.638631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.642553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.642639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.361 [2024-11-19 17:19:14.642660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.646555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.646624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.361 [2024-11-19 17:19:14.646646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:22.361 [2024-11-19 17:19:14.650538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8 00:20:22.361 [2024-11-19 17:19:14.650624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:22.361 [2024-11-19 17:19:14.650645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:22.361 [2024-11-19 17:19:14.654572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8
00:20:22.361 [2024-11-19 17:19:14.654643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.361 [2024-11-19 17:19:14.654664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... 22 similar record groups elided (WRITEs at lba:11360 through lba:4128): each 32-block WRITE hits tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:20:22.362 7664.50 IOPS, 958.06 MiB/s [2024-11-19T17:19:14.895Z] [2024-11-19 17:19:14.747392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa5c180) with pdu=0x2000166ff3c8
00:20:22.362 [2024-11-19 17:19:14.747453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.362 [2024-11-19 17:19:14.747473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:22.362
00:20:22.362 Latency(us)
00:20:22.362 [2024-11-19T17:19:14.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:22.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:22.362 nvme0n1 : 2.00 7662.26 957.78 0.00 0.00 2082.39 1414.98 5481.19
00:20:22.362 [2024-11-19T17:19:14.895Z] ===================================================================================================================
00:20:22.362 [2024-11-19T17:19:14.895Z] Total : 7662.26 957.78 0.00 0.00 2082.39 1414.98 5481.19
00:20:22.362 {
00:20:22.362 "results": [
00:20:22.362 {
00:20:22.362 "job": "nvme0n1",
00:20:22.362 "core_mask": "0x2",
00:20:22.362 "workload": "randwrite",
00:20:22.362 "status": "finished",
00:20:22.362 "queue_depth": 16,
00:20:22.362 "io_size": 131072,
00:20:22.362 "runtime": 2.003196,
00:20:22.362 "iops": 7662.255715366844,
00:20:22.362 "mibps": 957.7819644208555,
00:20:22.362 "io_failed": 0,
00:20:22.362 "io_timeout": 0,
00:20:22.362 "avg_latency_us": 2082.3880207771904,
00:20:22.362 "min_latency_us": 1414.9818181818182,
00:20:22.362 "max_latency_us": 5481.192727272727
00:20:22.362 }
00:20:22.362 ],
00:20:22.362 "core_count": 1
00:20:22.362 }
00:20:22.362 17:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:22.362 17:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:22.362 17:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:22.362 17:19:14
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:22.362 | .driver_specific 00:20:22.362 | .nvme_error 00:20:22.362 | .status_code 00:20:22.362 | .command_transient_transport_error' 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 496 > 0 )) 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94783 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94783 ']' 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94783 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94783 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.621 killing process with pid 94783 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94783' 00:20:22.621 Received shutdown signal, test time was about 2.000000 seconds 00:20:22.621 00:20:22.621 Latency(us) 00:20:22.621 [2024-11-19T17:19:15.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.621 [2024-11-19T17:19:15.154Z] =================================================================================================================== 00:20:22.621 [2024-11-19T17:19:15.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94783 00:20:22.621 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94783 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94500 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94500 ']' 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94500 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94500 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.880 killing process with pid 94500 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94500' 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 94500 00:20:22.880 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94500 00:20:23.138 00:20:23.138 real 0m17.568s 00:20:23.138 user 0m33.109s 00:20:23.138 sys 0m4.893s 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:23.138 ************************************ 00:20:23.138 END TEST nvmf_digest_error 00:20:23.138 ************************************ 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.138 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.396 rmmod nvme_tcp 00:20:23.396 rmmod nvme_fabrics 00:20:23.396 rmmod nvme_keyring 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94500 ']' 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94500 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94500 ']' 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94500 00:20:23.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94500) - No such process 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94500 is not found' 00:20:23.396 Process with pid 94500 is not found 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:23.396 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:23.655 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:23.655 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.655 17:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:23.655 00:20:23.655 real 0m34.324s 00:20:23.655 user 1m2.489s 00:20:23.655 sys 0m10.413s 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.655 ************************************ 00:20:23.655 END TEST nvmf_digest 00:20:23.655 ************************************ 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.655 ************************************ 00:20:23.655 START TEST nvmf_mdns_discovery 00:20:23.655 ************************************ 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:23.655 * Looking for test storage... 
00:20:23.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.655 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:23.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.914 --rc genhtml_branch_coverage=1 00:20:23.914 --rc genhtml_function_coverage=1 00:20:23.914 --rc genhtml_legend=1 00:20:23.914 --rc geninfo_all_blocks=1 00:20:23.914 --rc geninfo_unexecuted_blocks=1 00:20:23.914 00:20:23.914 ' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:23.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.914 --rc genhtml_branch_coverage=1 00:20:23.914 --rc genhtml_function_coverage=1 00:20:23.914 --rc genhtml_legend=1 00:20:23.914 --rc geninfo_all_blocks=1 00:20:23.914 --rc geninfo_unexecuted_blocks=1 00:20:23.914 00:20:23.914 ' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:23.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.914 --rc genhtml_branch_coverage=1 00:20:23.914 --rc genhtml_function_coverage=1 00:20:23.914 --rc genhtml_legend=1 00:20:23.914 --rc geninfo_all_blocks=1 00:20:23.914 --rc geninfo_unexecuted_blocks=1 00:20:23.914 00:20:23.914 ' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:23.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.914 --rc genhtml_branch_coverage=1 00:20:23.914 --rc genhtml_function_coverage=1 00:20:23.914 --rc genhtml_legend=1 00:20:23.914 --rc geninfo_all_blocks=1 00:20:23.914 --rc geninfo_unexecuted_blocks=1 00:20:23.914 00:20:23.914 ' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.914 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:23.915 Cannot find device "nvmf_init_br" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:23.915 Cannot find device "nvmf_init_br2" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:23.915 Cannot find device "nvmf_tgt_br" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.915 Cannot find device "nvmf_tgt_br2" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:23.915 Cannot find device "nvmf_init_br" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:23.915 Cannot find device "nvmf_init_br2" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:23.915 Cannot find device "nvmf_tgt_br" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:23.915 Cannot find device "nvmf_tgt_br2" 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:20:23.915 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:23.915 Cannot find device "nvmf_br" 00:20:23.916 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:20:23.916 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:23.916 Cannot find device "nvmf_init_if" 00:20:23.916 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:20:23.916 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:23.916 Cannot find device "nvmf_init_if2" 00:20:23.916 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:20:23.916 17:19:16 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
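At this point the harness has torn down any stale interfaces and rebuilt the test topology: two veth pairs (initiator side in the root namespace, target side moved into the nvmf_tgt_ns_spdk namespace), the 10.0.0.0/24 addresses, and the nvmf_br bridge; the records that follow enslave the host-side peer interfaces to the bridge and add the iptables ACCEPT rules. For reference, a minimal stand-alone sketch of the same topology with plain iproute2, using the interface names and addresses seen in the trace (not the exact nvmf_veth_init helper code; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created identically and omitted for brevity):

  # Sketch: recreate the initiator/target veth-and-bridge layout (needs root).
  ip netns add nvmf_tgt_ns_spdk                              # namespace for the NVMe-oF target
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joining both sides
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                    # enslave host-side peers,
  ip link set nvmf_tgt_br master nvmf_br                     # as the next records do
  ping -c 1 10.0.0.3                                         # root netns -> target, as verified below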
00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:24.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:24.175 00:20:24.175 --- 10.0.0.3 ping statistics --- 00:20:24.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.175 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:24.175 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:24.175 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:24.175 00:20:24.175 --- 10.0.0.4 ping statistics --- 00:20:24.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.175 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:24.175 00:20:24.175 --- 10.0.0.1 ping statistics --- 00:20:24.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.175 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:24.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:24.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:20:24.175 00:20:24.175 --- 10.0.0.2 ping statistics --- 00:20:24.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.175 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95136 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95136 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95136 ']' 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.175 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.176 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.176 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:24.176 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.176 17:19:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.434 [2024-11-19 17:19:16.758070] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:20:24.434 [2024-11-19 17:19:16.758157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.434 [2024-11-19 17:19:16.912621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.434 [2024-11-19 17:19:16.963535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.434 [2024-11-19 17:19:16.963604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.434 [2024-11-19 17:19:16.963617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.434 [2024-11-19 17:19:16.963628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.434 [2024-11-19 17:19:16.963643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.434 [2024-11-19 17:19:16.964156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 [2024-11-19 17:19:17.168933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 [2024-11-19 17:19:17.177087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 null0 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 null1 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 null2 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 null3 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95173 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95173 /tmp/host.sock 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95173 ']' 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.694 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.694 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.953 [2024-11-19 17:19:17.315422] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:20:24.953 [2024-11-19 17:19:17.315548] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95173 ] 00:20:24.953 [2024-11-19 17:19:17.464265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.211 [2024-11-19 17:19:17.512479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.211 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.211 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:25.211 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:25.211 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:25.211 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:25.470 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95189 00:20:25.470 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:25.470 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:25.470 17:19:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:25.470 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:25.470 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:25.470 Successfully dropped root privileges. 00:20:25.470 avahi-daemon 0.8 starting up. 00:20:25.470 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:25.470 Successfully called chroot(). 00:20:25.470 Successfully dropped remaining capabilities. 00:20:25.470 No service file found in /etc/avahi/services. 00:20:25.470 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:20:25.470 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:25.470 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:20:25.470 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:25.470 Network interface enumeration completed. 00:20:25.470 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
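
The avahi-daemon launch traced at mdns_discovery.sh@57-@58 above feeds its [server] config through process substitution, which is why the daemon reads /dev/fd/63. A minimal standalone sketch of that step, assuming only the netns and interface names the test itself created:

  # Run avahi-daemon inside the target's network namespace, limited to the
  # two test interfaces and IPv4 only; the config arrives on /dev/fd/63 via
  # process substitution, exactly as the xtrace shows. Backgrounding and PID
  # capture are reconstructed from the avahipid=... assignment at @58.
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(
      echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no'
  ) &
  avahipid=$!    # killed by the EXIT trap installed at @52

The "No NSS support for mDNS detected" warning that follows is harmless here: the test resolves services with avahi-browse, not through libc name lookup.
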
00:20:25.470 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:20:25.470 Registering new address record for fe80::8d1:deff:fe1b:a1b6 on nvmf_tgt_if.*. 00:20:25.470 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:20:26.405 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3017827958. 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:20:26.405 17:19:18 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:26.405 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:26.664 17:19:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
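
The central RPC of this phase is bdev_nvme_start_mdns_discovery, issued at mdns_discovery.sh@62 above; everything after it is polling until the browser resolves the _nvme-disc._tcp records. A hedged reconstruction of that call as a plain command (the scripts/rpc.py path is an assumption from a stock SPDK checkout; the trace goes through the rpc_cmd wrapper):

  # Start an mDNS browser for _nvme-disc._tcp on the host app
  # (-s /tmp/host.sock selects the host instance, not the target).
  #   -b mdns                      name prefix for auto-attached controllers
  #                                (mdns0_nvme0, mdns1_nvme0 later in this log)
  #   -q nqn.2021-12.io.spdk:test  host NQN offered to discovered subsystems
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

At this point both get_subsystem_names and get_bdev_list still compare equal to the empty string, as expected: no subsystem has published a discovery record yet.
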
00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 [2024-11-19 17:19:19.080060] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 [2024-11-19 17:19:19.129470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.664 17:19:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:20:27.597 [2024-11-19 17:19:19.980066] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:27.855 [2024-11-19 17:19:20.380103] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:27.855 [2024-11-19 17:19:20.380165] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:27.855 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:27.855 cookie is 0 00:20:27.855 is_local: 1 00:20:27.855 our_own: 0 00:20:27.855 wide_area: 0 00:20:27.855 multicast: 1 00:20:27.855 cached: 1 00:20:28.113 [2024-11-19 17:19:20.480071] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:28.113 [2024-11-19 17:19:20.480094] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:28.113 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:28.113 cookie is 0 00:20:28.113 is_local: 1 00:20:28.113 our_own: 0 00:20:28.113 wide_area: 0 00:20:28.113 multicast: 1 00:20:28.113 cached: 1 00:20:29.046 [2024-11-19 17:19:21.380885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.046 [2024-11-19 17:19:21.380931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3710 with addr=10.0.0.4, port=8009 00:20:29.046 [2024-11-19 17:19:21.380973] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:29.046 [2024-11-19 17:19:21.380986] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:29.046 [2024-11-19 17:19:21.381001] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:20:29.046 [2024-11-19 17:19:21.493462] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:29.046 [2024-11-19 17:19:21.493503] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:29.046 [2024-11-19 17:19:21.493521] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:29.304 [2024-11-19 17:19:21.579565] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:20:29.304 [2024-11-19 17:19:21.633929] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:29.304 [2024-11-19 17:19:21.634690] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1208e70:1 started. 00:20:29.304 [2024-11-19 17:19:21.636581] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:20:29.304 [2024-11-19 17:19:21.636621] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:29.304 [2024-11-19 17:19:21.641772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1208e70 was disconnected and freed. delete nvme_qpair. 00:20:29.871 [2024-11-19 17:19:22.380693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.871 [2024-11-19 17:19:22.380757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1341b30 with addr=10.0.0.4, port=8009 00:20:29.871 [2024-11-19 17:19:22.380773] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:29.871 [2024-11-19 17:19:22.380780] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:29.871 [2024-11-19 17:19:22.380787] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:20:31.245 [2024-11-19 17:19:23.380684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:31.245 [2024-11-19 17:19:23.380748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d30d0 with addr=10.0.0.4, port=8009 00:20:31.245 [2024-11-19 17:19:23.380764] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:31.245 [2024-11-19 17:19:23.380772] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:31.245 [2024-11-19 17:19:23.380780] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:31.811 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:31.811 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:31.811 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.811 [2024-11-19 17:19:24.219145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:20:31.811 [2024-11-19 17:19:24.221834] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:31.811 [2024-11-19 17:19:24.221862] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.811 [2024-11-19 17:19:24.227115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:20:31.811 [2024-11-19 17:19:24.227852] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:31.811 17:19:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.811 17:19:24 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:20:32.070 [2024-11-19 17:19:24.358923] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:32.070 [2024-11-19 17:19:24.358971] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:32.070 [2024-11-19 17:19:24.383325] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:20:32.070 [2024-11-19 17:19:24.383347] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:20:32.070 [2024-11-19 17:19:24.383377] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:32.070 [2024-11-19 17:19:24.445466] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:32.070 [2024-11-19 17:19:24.469407] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:20:32.070 [2024-11-19 17:19:24.523675] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:20:32.070 [2024-11-19 17:19:24.524166] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1205d90:1 started. 00:20:32.070 [2024-11-19 17:19:24.525498] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:20:32.070 [2024-11-19 17:19:24.525521] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:20:32.070 [2024-11-19 17:19:24.532041] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1205d90 was disconnected and freed. delete nvme_qpair. 
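
check_mdns_request_exists (@85-@108 above) decides found/not-found by scanning avahi-browse output line by line. The following sketch is inferred from the xtrace, not copied from the helper, so treat names and control flow as approximations:

  check_mdns_request_exists() {
      # Signature mirrors the locals at @85-@88.
      local process=$1 ip=$2 port=$3 check_type=$4 output line
      local -a lines
      # -t terminate after the dump, -r resolve (adds host/address/port/TXT),
      # -p machine-parsable ';'-separated fields.
      output=$(avahi-browse -t -r _nvme-disc._tcp -p)
      readarray -t lines <<< "$output"
      for line in "${lines[@]}"; do
          # A line naming the service, its IP and its port at once means the
          # record is currently announced.
          if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
              [[ $check_type == found ]] && return 0
          fi
      done
      [[ $check_type == 'not found' ]] && return 0
      return 1
  }

The first invocation (@152) expects "not found" because no spdk1 record is announced yet; after the 10.0.0.4 listeners are added (@154/@156), the same check (@160) expects "found".
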
00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:33.003 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:33.003 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:33.003 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:33.003 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:33.003 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:33.003 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:33.003 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:33.003 [2024-11-19 17:19:25.380108] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:33.003 [2024-11-19 17:19:25.380143] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:33.003 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:33.003 cookie is 0 00:20:33.003 is_local: 1 00:20:33.003 our_own: 0 00:20:33.003 wide_area: 0 00:20:33.003 multicast: 1 00:20:33.003 cached: 1 00:20:33.003 [2024-11-19 17:19:25.380161] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:33.003 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # 
get_subsystem_paths mdns1_nvme0 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.004 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.262 [2024-11-19 17:19:25.636853] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x120dfc0:1 started. 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.262 17:19:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:20:33.262 [2024-11-19 17:19:25.642283] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x120dfc0 was disconnected and freed. delete nvme_qpair. 
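
Every get_* helper in this stretch is a thin wrapper around a read-only RPC whose JSON is flattened with jq, sort and xargs into a single comparable string. A hedged sketch of the same checks as plain commands (scripts/rpc.py path assumed; expected values copied from the log):

  # All queries go to the host app via -s /tmp/host.sock.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'  # mdns
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'       # mdns0_nvme mdns1_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # mdns0_nvme0 mdns1_nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                     # mdns0_nvme0n1 mdns1_nvme0n1
  # Per-controller path check; expect 4420 here, 4420 4421 after @182/@183.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

notify_get_notifications -i 0 returning two events matches the two namespaces attached so far (null0 on cnode0, null2 on cnode20), which is why notification_count=2 and notify_id advances to 2.
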
00:20:33.262 [2024-11-19 17:19:25.644993] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1204720:1 started. 00:20:33.262 [2024-11-19 17:19:25.652215] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1204720 was disconnected and freed. delete nvme_qpair. 00:20:33.262 [2024-11-19 17:19:25.680115] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:33.262 [2024-11-19 17:19:25.680135] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:33.262 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:33.262 cookie is 0 00:20:33.262 is_local: 1 00:20:33.262 our_own: 0 00:20:33.262 wide_area: 0 00:20:33.262 multicast: 1 00:20:33.262 cached: 1 00:20:33.262 [2024-11-19 17:19:25.680145] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.197 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 [2024-11-19 17:19:26.776168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:34.455 [2024-11-19 17:19:26.776930] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:34.455 [2024-11-19 17:19:26.776960] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:34.455 [2024-11-19 17:19:26.776990] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:20:34.455 [2024-11-19 17:19:26.777002] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 [2024-11-19 17:19:26.784167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:20:34.455 [2024-11-19 17:19:26.784943] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:34.455 [2024-11-19 17:19:26.784997] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.455 17:19:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:20:34.455 [2024-11-19 17:19:26.916041] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:20:34.455 [2024-11-19 17:19:26.916334] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:20:34.455 [2024-11-19 17:19:26.977338] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:34.455 [2024-11-19 17:19:26.977401] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:20:34.455 
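
Adding the 4421 listeners (@182/@183 above) exercises multipath: each discovery AER triggers a fresh log page, the host sees a "new path", and a second qpair per controller comes up. The same two target-side RPCs as plain commands (no -s option, so they hit the default target socket, matching the unprefixed rpc_cmd calls in the trace):

  # Announce a second TCP listener per subsystem; discovery then reports
  # cnode0:10.0.0.3:4421 and cnode20:10.0.0.4:4421 as new paths.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
  # Afterwards each controller should list both service IDs (expect '4420 4421'):
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
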
[2024-11-19 17:19:26.977411] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:34.455 [2024-11-19 17:19:26.977416] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:34.455 [2024-11-19 17:19:26.977432] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:34.455 [2024-11-19 17:19:26.977619] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:20:34.455 [2024-11-19 17:19:26.977643] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:20:34.455 [2024-11-19 17:19:26.977650] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:20:34.455 [2024-11-19 17:19:26.977655] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:20:34.455 [2024-11-19 17:19:26.977668] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:34.714 [2024-11-19 17:19:27.023145] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:34.714 [2024-11-19 17:19:27.023164] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:34.714 [2024-11-19 17:19:27.023217] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:20:34.714 [2024-11-19 17:19:27.023225] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.306 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:20:35.564 17:19:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]]
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.564 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.564 [2024-11-19 17:19:28.093728] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:20:35.564 [2024-11-19 17:19:28.093783] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:20:35.564 [2024-11-19 17:19:28.093815] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:20:35.564 [2024-11-19 17:19:28.093826] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:20:35.826 [2024-11-19 17:19:28.096828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.096862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.096873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.096881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.096889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.096897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.096906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.096913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.096921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:35.826 [2024-11-19 17:19:28.105760] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:20:35.826 [2024-11-19 17:19:28.105817] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:20:35.826 [2024-11-19 17:19:28.106808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.826 17:19:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1
00:20:35.826 [2024-11-19 17:19:28.114580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.114627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.114638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.114646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.114654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.114662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.114671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:35.826 [2024-11-19 17:19:28.114678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:35.826 [2024-11-19 17:19:28.114686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set
00:20:35.826 [2024-11-19 17:19:28.116822] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:20:35.826 [2024-11-19 17:19:28.116862] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:20:35.826 [2024-11-19 17:19:28.116872] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:20:35.826 [2024-11-19 17:19:28.116877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:20:35.826 [2024-11-19 17:19:28.116903] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
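The helper functions named in the xtrace above (get_subsystem_paths, get_notification_count) are thin wrappers around host-side RPCs. A minimal sketch, reconstructed from the trace alone (the exact bodies in host/mdns_discovery.sh may differ):

    # Sketch: print a controller's path ports as one sorted, space-separated string.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Sketch: count notifications issued since $notify_id, then advance the cursor.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

The escaped strings such as \4\4\2\0\ \4\4\2\1 are simply how bash xtrace renders the quoted right-hand side of a [[ "$paths" == "4420 4421" ]] comparison; both ports were still attached when checked at mdns_discovery.sh@188/@189, before the two nvmf_subsystem_remove_listener calls above took effect.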
00:20:35.826 [2024-11-19 17:19:28.116954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.826 [2024-11-19 17:19:28.116972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.826 [2024-11-19 17:19:28.116981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.826 [2024-11-19 17:19:28.116996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.826 [2024-11-19 17:19:28.117009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.826 [2024-11-19 17:19:28.117018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.826 [2024-11-19 17:19:28.117026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.826 [2024-11-19 17:19:28.117034] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.826 [2024-11-19 17:19:28.117041] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.826 [2024-11-19 17:19:28.117045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.826 [2024-11-19 17:19:28.124550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.826 [2024-11-19 17:19:28.126910] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.826 [2024-11-19 17:19:28.126928] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.826 [2024-11-19 17:19:28.126949] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.826 [2024-11-19 17:19:28.126953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.826 [2024-11-19 17:19:28.126973] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.826 [2024-11-19 17:19:28.127019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.826 [2024-11-19 17:19:28.127036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.826 [2024-11-19 17:19:28.127045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.826 [2024-11-19 17:19:28.127058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.826 [2024-11-19 17:19:28.127071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.826 [2024-11-19 17:19:28.127079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.826 [2024-11-19 17:19:28.127087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:20:35.826 [2024-11-19 17:19:28.127093] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.826 [2024-11-19 17:19:28.127098] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.826 [2024-11-19 17:19:28.127102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.826 [2024-11-19 17:19:28.134556] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.826 [2024-11-19 17:19:28.134596] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.826 [2024-11-19 17:19:28.134601] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.826 [2024-11-19 17:19:28.134605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.826 [2024-11-19 17:19:28.134628] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.826 [2024-11-19 17:19:28.134673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.826 [2024-11-19 17:19:28.134690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.826 [2024-11-19 17:19:28.134699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.826 [2024-11-19 17:19:28.134713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.826 [2024-11-19 17:19:28.134725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.826 [2024-11-19 17:19:28.134733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.826 [2024-11-19 17:19:28.134758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.826 [2024-11-19 17:19:28.134766] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.826 [2024-11-19 17:19:28.134771] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.826 [2024-11-19 17:19:28.134775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.136982] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.137019] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.137024] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.137029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.137050] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
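What repeats above and below is one reconnect cycle per controller: the 4420 listeners were just removed, so every attempt by the host's bdev_nvme layer to re-establish the TCP connection to 10.0.0.3:4420 (tqpair 0x11e57f0, nqn.2016-06.io.spdk:cnode0) or 10.0.0.4:4420 (tqpair 0x11f34a0, nqn.2016-06.io.spdk:cnode20) is refused by the target, posix_sock_create() fails with errno 111 (ECONNREFUSED), the controller is marked failed, and the reset is retried. The cycles stop once each discovery service processes the updated discovery log page and drops the stale path. A hypothetical polling helper, not part of the test (which simply does sleep 1 at mdns_discovery.sh@197), could wait for that explicitly:

    # Hypothetical helper: block until $port no longer appears among $ctrlr's paths.
    wait_for_path_removal() {
        local ctrlr=$1 port=$2
        while rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | grep -qx "$port"; do
            sleep 0.1
        done
    }

    # e.g.: wait_for_path_removal mdns0_nvme0 4420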
00:20:35.827 [2024-11-19 17:19:28.137092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.137109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.827 [2024-11-19 17:19:28.137118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.137131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.137143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.137151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.137159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.137166] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.827 [2024-11-19 17:19:28.137170] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.827 [2024-11-19 17:19:28.137174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.144635] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.144657] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.144678] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.144682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.144703] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.144757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.144774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.827 [2024-11-19 17:19:28.144784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.144797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.144826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.144836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.144844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.144851] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:20:35.827 [2024-11-19 17:19:28.144856] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.827 [2024-11-19 17:19:28.144860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.147057] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.147077] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.147098] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.147102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.147123] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.147166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.147182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.827 [2024-11-19 17:19:28.147191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.147204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.147217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.147225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.147232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.147240] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.827 [2024-11-19 17:19:28.147244] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.827 [2024-11-19 17:19:28.147249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.154715] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.154766] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.154773] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.154778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.154800] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:20:35.827 [2024-11-19 17:19:28.154847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.154864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.827 [2024-11-19 17:19:28.154873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.154902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.154916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.154924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.154932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.154939] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.827 [2024-11-19 17:19:28.154944] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.827 [2024-11-19 17:19:28.154948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.157132] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.157154] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.157175] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.157180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.157200] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.157243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.157259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.827 [2024-11-19 17:19:28.157268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.157282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.157295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.157303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.157311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.157318] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:35.827 [2024-11-19 17:19:28.157323] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.827 [2024-11-19 17:19:28.157327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.827 [2024-11-19 17:19:28.164810] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.827 [2024-11-19 17:19:28.164832] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.827 [2024-11-19 17:19:28.164852] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.164856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.827 [2024-11-19 17:19:28.164877] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.827 [2024-11-19 17:19:28.164935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.827 [2024-11-19 17:19:28.164952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.827 [2024-11-19 17:19:28.164961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.827 [2024-11-19 17:19:28.164975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.827 [2024-11-19 17:19:28.164987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.827 [2024-11-19 17:19:28.164995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.827 [2024-11-19 17:19:28.165003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.827 [2024-11-19 17:19:28.165010] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.828 [2024-11-19 17:19:28.165015] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.165019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.167208] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.167243] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.167248] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.167252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.167272] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:35.828 [2024-11-19 17:19:28.167313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.167329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.828 [2024-11-19 17:19:28.167341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.167355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.167375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.167385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.828 [2024-11-19 17:19:28.167393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.828 [2024-11-19 17:19:28.167400] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.828 [2024-11-19 17:19:28.167405] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.167409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.174885] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.174924] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.174929] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.174933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.174955] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.174996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.175012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.828 [2024-11-19 17:19:28.175021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.175035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.175047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.175056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.828 [2024-11-19 17:19:28.175064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.828 [2024-11-19 17:19:28.175071] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:20:35.828 [2024-11-19 17:19:28.175075] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.175079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.177280] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.177318] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.177323] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.177327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.177349] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.177390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.177406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.828 [2024-11-19 17:19:28.177415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.177428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.177441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.177449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.828 [2024-11-19 17:19:28.177457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.828 [2024-11-19 17:19:28.177464] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.828 [2024-11-19 17:19:28.177469] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.177473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.184961] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.184983] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.185005] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.185009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.185029] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
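Two controllers are failing in lock-step here, one per mDNS discovery service: tqpair 0x11e57f0 is cnode0's path via 10.0.0.3 and tqpair 0x11f34a0 is cnode20's path via 10.0.0.4. When reading interleaved retry output like this, filtering on the tqpair address or the NQN separates the two event streams, for example (the log file name is illustrative):

    grep '0x11f34a0' console.log      # only the 10.0.0.4 / cnode20 retries
    grep 'cnode0, 1' console.log      # only the 10.0.0.3 / cnode0 state changes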
00:20:35.828 [2024-11-19 17:19:28.185071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.185088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.828 [2024-11-19 17:19:28.185096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.185109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.185121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.185129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.828 [2024-11-19 17:19:28.185137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.828 [2024-11-19 17:19:28.185144] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.828 [2024-11-19 17:19:28.185149] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.185153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.187356] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.187391] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.187396] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.187400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.187420] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.187469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.187485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.828 [2024-11-19 17:19:28.187493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.187506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.187519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.187527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.828 [2024-11-19 17:19:28.187535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.828 [2024-11-19 17:19:28.187542] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:35.828 [2024-11-19 17:19:28.187547] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.828 [2024-11-19 17:19:28.187551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.828 [2024-11-19 17:19:28.195039] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.828 [2024-11-19 17:19:28.195081] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.828 [2024-11-19 17:19:28.195086] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.195091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.828 [2024-11-19 17:19:28.195114] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.828 [2024-11-19 17:19:28.195159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.828 [2024-11-19 17:19:28.195176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.828 [2024-11-19 17:19:28.195185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.828 [2024-11-19 17:19:28.195199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.828 [2024-11-19 17:19:28.195212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.828 [2024-11-19 17:19:28.195220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.195228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.195235] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.829 [2024-11-19 17:19:28.195240] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.195244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.197429] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.197471] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.197476] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.197481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.197502] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:35.829 [2024-11-19 17:19:28.197547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.197564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.829 [2024-11-19 17:19:28.197573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.197586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.829 [2024-11-19 17:19:28.197609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.829 [2024-11-19 17:19:28.197619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.197627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.197634] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.829 [2024-11-19 17:19:28.197639] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.197643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.205122] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.205160] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.205166] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.205170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.205192] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.205235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.205252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.829 [2024-11-19 17:19:28.205261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.205280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.829 [2024-11-19 17:19:28.205292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.829 [2024-11-19 17:19:28.205301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.205309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.205316] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:20:35.829 [2024-11-19 17:19:28.205321] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.205325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.207510] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.207544] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.207550] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.207554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.207574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.207614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.829 [2024-11-19 17:19:28.207639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.207652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.829 [2024-11-19 17:19:28.207675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.829 [2024-11-19 17:19:28.207685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.207693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.207700] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.829 [2024-11-19 17:19:28.207705] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.207709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.215199] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.215236] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.215242] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.215246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.215268] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:20:35.829 [2024-11-19 17:19:28.215312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.215328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.829 [2024-11-19 17:19:28.215337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.215350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.829 [2024-11-19 17:19:28.215363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.829 [2024-11-19 17:19:28.215370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.215379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.215386] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.829 [2024-11-19 17:19:28.215391] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.215395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.217582] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.217620] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.217626] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.217630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.217651] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.217695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.217711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.829 [2024-11-19 17:19:28.217720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.217755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.829 [2024-11-19 17:19:28.217771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.829 [2024-11-19 17:19:28.217779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.829 [2024-11-19 17:19:28.217787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.829 [2024-11-19 17:19:28.217794] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:35.829 [2024-11-19 17:19:28.217799] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.829 [2024-11-19 17:19:28.217803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.829 [2024-11-19 17:19:28.225275] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.829 [2024-11-19 17:19:28.225313] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.829 [2024-11-19 17:19:28.225318] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.225322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.829 [2024-11-19 17:19:28.225344] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.829 [2024-11-19 17:19:28.225385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.829 [2024-11-19 17:19:28.225402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.829 [2024-11-19 17:19:28.225410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.829 [2024-11-19 17:19:28.225423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.830 [2024-11-19 17:19:28.225437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.830 [2024-11-19 17:19:28.225444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.830 [2024-11-19 17:19:28.225452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.830 [2024-11-19 17:19:28.225459] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:20:35.830 [2024-11-19 17:19:28.225464] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.830 [2024-11-19 17:19:28.225468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.830 [2024-11-19 17:19:28.227658] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.830 [2024-11-19 17:19:28.227692] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.830 [2024-11-19 17:19:28.227698] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.830 [2024-11-19 17:19:28.227702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.830 [2024-11-19 17:19:28.227722] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:35.830 [2024-11-19 17:19:28.227781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.830 [2024-11-19 17:19:28.227798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.830 [2024-11-19 17:19:28.227808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.830 [2024-11-19 17:19:28.227821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.830 [2024-11-19 17:19:28.227833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.830 [2024-11-19 17:19:28.227841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.830 [2024-11-19 17:19:28.227849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.830 [2024-11-19 17:19:28.227856] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.830 [2024-11-19 17:19:28.227861] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.830 [2024-11-19 17:19:28.227865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:35.830 [2024-11-19 17:19:28.235352] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:20:35.830 [2024-11-19 17:19:28.235391] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:20:35.830 [2024-11-19 17:19:28.235396] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:20:35.830 [2024-11-19 17:19:28.235401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:20:35.830 [2024-11-19 17:19:28.235422] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:20:35.830 [2024-11-19 17:19:28.235464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.830 [2024-11-19 17:19:28.235480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f34a0 with addr=10.0.0.4, port=4420 00:20:35.830 [2024-11-19 17:19:28.235489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f34a0 is same with the state(6) to be set 00:20:35.830 [2024-11-19 17:19:28.235510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f34a0 (9): Bad file descriptor 00:20:35.830 [2024-11-19 17:19:28.235524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:20:35.830 [2024-11-19 17:19:28.235532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:20:35.830 [2024-11-19 17:19:28.235540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:20:35.830 [2024-11-19 17:19:28.235547] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:20:35.830 [2024-11-19 17:19:28.235552] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:20:35.830 [2024-11-19 17:19:28.235556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:20:35.830 [2024-11-19 17:19:28.237730] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.830 [2024-11-19 17:19:28.237776] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.830 [2024-11-19 17:19:28.237782] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.830 [2024-11-19 17:19:28.237786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.830 [2024-11-19 17:19:28.237807] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:35.830 [2024-11-19 17:19:28.237849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.830 [2024-11-19 17:19:28.237865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e57f0 with addr=10.0.0.3, port=4420 00:20:35.830 [2024-11-19 17:19:28.237874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e57f0 is same with the state(6) to be set 00:20:35.830 [2024-11-19 17:19:28.237887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e57f0 (9): Bad file descriptor 00:20:35.830 [2024-11-19 17:19:28.237909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:35.830 [2024-11-19 17:19:28.237918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:35.830 [2024-11-19 17:19:28.237926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:35.830 [2024-11-19 17:19:28.237933] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:35.830 [2024-11-19 17:19:28.237938] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:35.830 [2024-11-19 17:19:28.237942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
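The retry storm ends directly below: once each discovery service receives the updated discovery log page, the removed 4420 entries are reported "not found" and dropped while the 4421 entries are "found again" and kept, after which the test re-checks that each controller is left with port 4421 only. To confirm the same state by hand, both ends expose it over RPC; a sketch using the sockets from this test (the target-side call assumes SPDK's default RPC socket):

    # Host side: remaining path ports for one controller (expect just 4421).
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'

    # Target side: listeners still attached to the subsystem.
    scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0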
00:20:35.830 [2024-11-19 17:19:28.237971] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:35.830 [2024-11-19 17:19:28.238004] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:35.830 [2024-11-19 17:19:28.238035] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:35.830 [2024-11-19 17:19:28.238066] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:20:35.830 [2024-11-19 17:19:28.238081] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:20:35.830 [2024-11-19 17:19:28.238093] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:35.830 [2024-11-19 17:19:28.324005] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:35.830 [2024-11-19 17:19:28.324096] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:36.765 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 [2024-11-19 17:19:29.380132] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.023 17:19:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:20:37.957 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:20:37.957 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:37.957 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:37.957 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:37.957 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.958 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:38.217 17:19:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.217 [2024-11-19 17:19:30.600076] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:38.217 2024/11/19 17:19:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:38.217 request: 00:20:38.217 { 00:20:38.217 "method": "bdev_nvme_start_mdns_discovery", 00:20:38.217 "params": { 00:20:38.217 "name": "mdns", 00:20:38.217 "svcname": "_nvme-disc._http", 00:20:38.217 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:38.217 } 00:20:38.217 } 00:20:38.217 Got JSON-RPC error response 00:20:38.217 GoRPCClient: error on JSON-RPC call 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.217 17:19:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:20:38.784 [2024-11-19 17:19:31.188637] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:38.784 [2024-11-19 17:19:31.288633] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:39.042 [2024-11-19 17:19:31.388639] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.042 [2024-11-19 17:19:31.388660] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:39.042 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.042 cookie is 0 00:20:39.043 is_local: 1 00:20:39.043 our_own: 0 00:20:39.043 wide_area: 0 00:20:39.043 multicast: 1 00:20:39.043 cached: 1 00:20:39.043 [2024-11-19 17:19:31.488640] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.043 [2024-11-19 17:19:31.488664] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:39.043 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.043 cookie is 0 00:20:39.043 is_local: 1 00:20:39.043 our_own: 0 00:20:39.043 wide_area: 0 00:20:39.043 multicast: 1 00:20:39.043 cached: 1 00:20:39.043 [2024-11-19 17:19:31.488691] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:20:39.302 [2024-11-19 17:19:31.588641] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.302 [2024-11-19 17:19:31.588664] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:39.302 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.302 cookie is 0 00:20:39.302 is_local: 1 00:20:39.302 our_own: 0 00:20:39.302 wide_area: 0 00:20:39.302 multicast: 1 00:20:39.302 cached: 1 00:20:39.302 [2024-11-19 17:19:31.688639] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.302 [2024-11-19 17:19:31.688662] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:39.302 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.302 cookie is 0 00:20:39.302 is_local: 1 00:20:39.302 our_own: 0 00:20:39.302 wide_area: 0 00:20:39.302 multicast: 1 00:20:39.302 cached: 1 00:20:39.302 [2024-11-19 17:19:31.688687] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:39.869 [2024-11-19 17:19:32.394800] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:20:39.869 [2024-11-19 17:19:32.394822] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:20:39.869 [2024-11-19 17:19:32.394855] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:40.127 [2024-11-19 17:19:32.480896] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:20:40.127 [2024-11-19 17:19:32.539205] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:20:40.127 [2024-11-19 17:19:32.539772] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x12167d0:1 started. 00:20:40.127 [2024-11-19 17:19:32.541474] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:20:40.127 [2024-11-19 17:19:32.541500] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:20:40.127 [2024-11-19 17:19:32.543491] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x12167d0 was disconnected and freed. delete nvme_qpair. 
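The [[ 4421 == \4\4\2\1 ]] assertions above come from the get_subsystem_paths helper; condensed, each one amounts to the check below (same rpc_cmd helper and socket, controller name as in the trace):

    # List the live trsvcids for one multipath controller and require that
    # only the re-announced 4421 listener remains.
    paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ $paths == 4421 ]]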
00:20:40.127 [2024-11-19 17:19:32.594507] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:40.127 [2024-11-19 17:19:32.594528] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:40.127 [2024-11-19 17:19:32.594559] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:40.386 [2024-11-19 17:19:32.680599] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:20:40.386 [2024-11-19 17:19:32.738864] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:40.386 [2024-11-19 17:19:32.739467] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1216160:1 started. 00:20:40.386 [2024-11-19 17:19:32.740657] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:20:40.386 [2024-11-19 17:19:32.740680] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:40.386 [2024-11-19 17:19:32.743418] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1216160 was disconnected and freed. delete nvme_qpair. 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:43.673 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 [2024-11-19 17:19:35.802361] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:43.674 2024/11/19 17:19:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:43.674 request: 00:20:43.674 { 00:20:43.674 "method": "bdev_nvme_start_mdns_discovery", 00:20:43.674 "params": { 00:20:43.674 "name": "cdc", 00:20:43.674 "svcname": "_nvme-disc._tcp", 00:20:43.674 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:43.674 } 00:20:43.674 } 00:20:43.674 Got JSON-RPC error response 00:20:43.674 GoRPCClient: error on JSON-RPC call 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:43.674 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:43.674 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:43.674 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:43.674 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:43.674 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:43.674 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:43.674 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:43.674 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.675 17:19:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:20:43.675 [2024-11-19 17:19:35.988642] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:20:44.611 17:19:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:44.611 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:44.611 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:44.611 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95173 00:20:44.611 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95173 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95189 00:20:44.870 Got SIGTERM, quitting. 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.870 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:20:44.870 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:20:44.870 avahi-daemon 0.8 exiting. 
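The spdk1 pattern matches above are check_mdns_request_exists from host/mdns_discovery.sh walking avahi-browse's parsable output; a condensed sketch of that helper (the real one distinguishes the "+" browse lines from the "=" resolve lines, as the trace shows):

    # Scan parsable avahi-browse output for a record carrying the given
    # service name, address and port; check_type selects whether a match or
    # its absence counts as success.
    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4 line
        while IFS= read -r line; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                [[ $check_type == found ]]
                return
            fi
        done < <(avahi-browse -t -r _nvme-disc._tcp -p)
        [[ $check_type == 'not found' ]]
    }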
00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.870 rmmod nvme_tcp 00:20:44.870 rmmod nvme_fabrics 00:20:44.870 rmmod nvme_keyring 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95136 ']' 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95136 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95136 ']' 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95136 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95136 00:20:44.870 killing process with pid 95136 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:44.870 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:44.871 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95136' 00:20:44.871 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95136 00:20:44.871 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95136 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:45.129 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:45.388 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:20:45.389 00:20:45.389 real 0m21.778s 00:20:45.389 user 0m42.496s 00:20:45.389 sys 0m2.106s 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.389 17:19:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.389 ************************************ 00:20:45.389 END TEST nvmf_mdns_discovery 00:20:45.389 ************************************ 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 ************************************ 00:20:45.649 START TEST nvmf_host_multipath 00:20:45.649 ************************************ 00:20:45.649 17:19:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.649 * Looking for test storage... 
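The nvmftestfini sequence that closed the previous test unwinds what nvmftestinit created; beyond unloading the nvme-tcp modules, the two notable steps are the iptables scrub and the namespace teardown, roughly as follows (remove_spdk_ns is assumed here to reduce to an ip netns delete):

    # Drop every SPDK_NVMF-tagged firewall rule by re-applying a filtered
    # dump, then dismantle the veth/bridge topology and the target netns.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk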
00:20:45.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:45.649 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.650 --rc genhtml_branch_coverage=1 00:20:45.650 --rc genhtml_function_coverage=1 00:20:45.650 --rc genhtml_legend=1 00:20:45.650 --rc geninfo_all_blocks=1 00:20:45.650 --rc geninfo_unexecuted_blocks=1 00:20:45.650 00:20:45.650 ' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.650 --rc genhtml_branch_coverage=1 00:20:45.650 --rc genhtml_function_coverage=1 00:20:45.650 --rc genhtml_legend=1 00:20:45.650 --rc geninfo_all_blocks=1 00:20:45.650 --rc geninfo_unexecuted_blocks=1 00:20:45.650 00:20:45.650 ' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.650 --rc genhtml_branch_coverage=1 00:20:45.650 --rc genhtml_function_coverage=1 00:20:45.650 --rc genhtml_legend=1 00:20:45.650 --rc geninfo_all_blocks=1 00:20:45.650 --rc geninfo_unexecuted_blocks=1 00:20:45.650 00:20:45.650 ' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.650 --rc genhtml_branch_coverage=1 00:20:45.650 --rc genhtml_function_coverage=1 00:20:45.650 --rc genhtml_legend=1 00:20:45.650 --rc geninfo_all_blocks=1 00:20:45.650 --rc geninfo_unexecuted_blocks=1 00:20:45.650 00:20:45.650 ' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.650 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:45.651 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:45.651 Cannot find device "nvmf_init_br" 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:45.651 Cannot find device "nvmf_init_br2" 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:45.651 Cannot find device "nvmf_tgt_br" 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:45.651 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.910 Cannot find device "nvmf_tgt_br2" 00:20:45.910 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:45.911 Cannot find device "nvmf_init_br" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:45.911 Cannot find device "nvmf_init_br2" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:45.911 Cannot find device "nvmf_tgt_br" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:45.911 Cannot find device "nvmf_tgt_br2" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:45.911 Cannot find device "nvmf_br" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:45.911 Cannot find device "nvmf_init_if" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:45.911 Cannot find device "nvmf_init_if2" 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:45.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
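What the log is doing here is nvmf_veth_init wiring up the virtual test network; the "Cannot find device" / "Cannot open network namespace" lines above are just its best-effort cleanup of devices that do not exist yet on a fresh host. Condensed into a standalone sketch (interface names, namespace, and addresses are copied from the log; the script framing is editorial, and the bridge-enslave and iptables steps it ends with appear in the log just below):

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_veth_init builds in this run -- not the test's
    # own code. Requires root and a clean host.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # Four veth pairs: two initiator-side, two target-side.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # The target ends move into the namespace; the *_br peers stay outside.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Initiators are 10.0.0.1/.2, targets 10.0.0.3/.4, all in one /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # One bridge joins the four peer ends, so both initiator interfaces can
    # reach both target interfaces.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Let NVMe/TCP (port 4420) in on the initiator sides and allow forwarding
    # across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the log are the sanity check: each side of the bridge must reach the other before the target is started.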
00:20:45.911 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:46.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:46.169 00:20:46.169 --- 10.0.0.3 ping statistics --- 00:20:46.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.169 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:46.169 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:46.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:20:46.169 00:20:46.169 --- 10.0.0.4 ping statistics --- 00:20:46.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.169 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:46.169 00:20:46.169 --- 10.0.0.1 ping statistics --- 00:20:46.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.169 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:46.169 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:46.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:46.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:20:46.169 00:20:46.169 --- 10.0.0.2 ping statistics --- 00:20:46.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.170 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95844 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95844 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95844 ']' 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.170 17:19:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.170 [2024-11-19 17:19:38.601542] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
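nvmfappstart has just launched the target (pid 95844) pinned to cores 0-1 (-m 0x3) with all tracepoint groups enabled (-e 0xFFFF); its startup banner continues below. Note that /var/tmp/spdk.sock is a filesystem UNIX socket, so RPCs reach the target from outside the namespace even though its network stack lives inside. A minimal stand-in for the launch-and-wait step, assuming a simple polling loop (the autotest waitforlisten helper is more thorough):

    # Start nvmf_tgt inside the target namespace, as in the log.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the default RPC socket until the app answers (stand-in for waitforlisten).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done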
00:20:46.170 [2024-11-19 17:19:38.601644] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.428 [2024-11-19 17:19:38.754449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.428 [2024-11-19 17:19:38.824211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.428 [2024-11-19 17:19:38.824280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.428 [2024-11-19 17:19:38.824295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.428 [2024-11-19 17:19:38.824306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.428 [2024-11-19 17:19:38.824316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.428 [2024-11-19 17:19:38.825755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.428 [2024-11-19 17:19:38.825769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95844 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:47.364 [2024-11-19 17:19:39.849886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.364 17:19:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:47.931 Malloc0 00:20:47.931 17:19:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:47.931 17:19:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.190 17:19:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:48.449 [2024-11-19 17:19:40.885905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.449 17:19:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:20:48.707 [2024-11-19 17:19:41.113899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95942 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95942 /var/tmp/bdevperf.sock 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95942 ']' 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.707 17:19:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:49.643 17:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.643 17:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:49.643 17:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:49.901 17:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:50.468 Nvme0n1 00:20:50.468 17:19:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:50.727 Nvme0n1 00:20:50.727 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:50.727 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.661 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:51.661 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:51.919 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
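At this point the data path is fully assembled. Collapsed to the bare commands (all copied from the log; the helper function and loop are editorial), the target/host provisioning looks like this, and from here on the test only flips ANA states and watches where the I/O goes:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and
    # one subsystem with ANA reporting (-r) listening on both ports.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421

    # Host side: bdevperf in server mode (-z) on its own RPC socket, then the
    # same controller attached over both listeners; -x multipath folds the
    # second connect into Nvme0 as an extra path instead of a second bdev.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

    brpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_set_options -r -1
    for port in 4420 4421; do
        brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s "$port" \
            -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
    done

The set_ANA_state call just above then marks 4420 non_optimized and 4421 optimized, so the multipath policy should steer all I/O to 4421 -- which is what the first confirm_io_on_port round below verifies.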
00:20:52.177 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:52.177 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96035 00:20:52.177 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:52.177 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.741 Attaching 4 probes... 00:20:58.741 @path[10.0.0.3, 4421]: 19492 00:20:58.741 @path[10.0.0.3, 4421]: 20048 00:20:58.741 @path[10.0.0.3, 4421]: 20419 00:20:58.741 @path[10.0.0.3, 4421]: 19952 00:20:58.741 @path[10.0.0.3, 4421]: 20018 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96035 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:58.741 17:19:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:58.741 17:19:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:59.000 17:19:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:59.000 17:19:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96171 00:20:59.000 17:19:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:59.000 17:19:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | 
select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.562 Attaching 4 probes... 00:21:05.562 @path[10.0.0.3, 4420]: 20713 00:21:05.562 @path[10.0.0.3, 4420]: 21621 00:21:05.562 @path[10.0.0.3, 4420]: 21407 00:21:05.562 @path[10.0.0.3, 4420]: 21256 00:21:05.562 @path[10.0.0.3, 4420]: 21476 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96171 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:05.562 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:05.562 17:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:05.821 17:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:05.821 17:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:05.821 17:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96297 00:21:05.821 17:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.384 Attaching 4 probes... 
00:21:12.384 @path[10.0.0.3, 4421]: 15318 00:21:12.384 @path[10.0.0.3, 4421]: 18716 00:21:12.384 @path[10.0.0.3, 4421]: 19882 00:21:12.384 @path[10.0.0.3, 4421]: 18946 00:21:12.384 @path[10.0.0.3, 4421]: 19342 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96297 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:12.384 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:12.642 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:12.901 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:12.901 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96433 00:21:12.901 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.901 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.467 Attaching 4 probes... 
00:21:19.467 00:21:19.467 00:21:19.467 00:21:19.467 00:21:19.467 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96433 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:19.467 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:19.725 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:19.725 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96568 00:21:19.726 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.726 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.285 Attaching 4 probes... 
00:21:26.285 @path[10.0.0.3, 4421]: 19296 00:21:26.285 @path[10.0.0.3, 4421]: 19854 00:21:26.285 @path[10.0.0.3, 4421]: 19729 00:21:26.285 @path[10.0.0.3, 4421]: 19630 00:21:26.285 @path[10.0.0.3, 4421]: 19593 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96568 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:26.285 [2024-11-19 17:20:18.621707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7e90 is same with the state(6) to be set 00:21:26.285 [2024-11-19 17:20:18.622050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7e90 is same with the state(6) to be set 00:21:26.285 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:27.218 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:27.218 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96698 00:21:27.218 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:27.218
17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.848 Attaching 4 probes... 00:21:33.848 @path[10.0.0.3, 4420]: 20442 00:21:33.848 @path[10.0.0.3, 4420]: 21240 00:21:33.848 @path[10.0.0.3, 4420]: 20919 00:21:33.848 @path[10.0.0.3, 4420]: 20978 00:21:33.848 @path[10.0.0.3, 4420]: 20987 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:33.848 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.849 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.849 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96698 00:21:33.849 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.849 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:33.849 [2024-11-19 17:20:26.181199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:33.849 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:34.107 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:40.673 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:40.673 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96896 00:21:40.673 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:40.673 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:45.976 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:45.976 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 
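Every confirm_io_on_port round in this run (including the one finishing just below) follows the same recipe: start the nvmf_path.bt bpftrace probe against the target pid, sleep six seconds while bdevperf keeps issuing I/O, then cross-check the target's view of the ANA state against the path the probe actually counted traffic on. Boiled down to a sketch -- the jq and awk filters are the log's own, the function wrapper and file handling are not:

    # trace.txt is written by scripts/bpftrace.sh 95844 scripts/bpf/nvmf_path.bt,
    # which counts submitted I/Os per path, one line per path, e.g.:
    #   @path[10.0.0.3, 4421]: 19296
    confirm_io_on_port() {
        local state=$1 expected=$2
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

        # Which listener does the target report in the expected ANA state?
        local active_port
        active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r --arg s "$state" \
               '.[] | select(.ana_states[0].ana_state == $s) | .address.trsvcid')

        # Which port did the probe actually see I/O on?
        local port
        port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt |
            cut -d ']' -f1 | sed -n 1p)

        [[ $port == "$expected" && $active_port == "$expected" ]]
    }

The earlier all-inaccessible round is the degenerate case of the same check: no @path lines appear at all, so both sides reduce to the empty string and the '' == '' comparison passes.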
00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.234 Attaching 4 probes... 00:21:46.234 @path[10.0.0.3, 4421]: 19683 00:21:46.234 @path[10.0.0.3, 4421]: 19713 00:21:46.234 @path[10.0.0.3, 4421]: 19593 00:21:46.234 @path[10.0.0.3, 4421]: 19631 00:21:46.234 @path[10.0.0.3, 4421]: 19855 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96896 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95942 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95942 ']' 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95942 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95942 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:46.234 killing process with pid 95942 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95942' 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95942 00:21:46.234 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95942 00:21:46.234 { 00:21:46.234 "results": [ 00:21:46.234 { 00:21:46.234 "job": "Nvme0n1", 00:21:46.234 "core_mask": "0x4", 00:21:46.234 "workload": "verify", 00:21:46.234 "status": "terminated", 00:21:46.234 "verify_range": { 00:21:46.234 "start": 0, 00:21:46.234 "length": 16384 00:21:46.234 }, 00:21:46.234 "queue_depth": 128, 00:21:46.234 "io_size": 4096, 00:21:46.234 "runtime": 55.558077, 00:21:46.234 "iops": 8663.762786462174, 00:21:46.234 "mibps": 33.842823384617866, 00:21:46.234 "io_failed": 0, 00:21:46.234 "io_timeout": 0, 00:21:46.234 "avg_latency_us": 14746.445515836216, 00:21:46.234 "min_latency_us": 983.04, 00:21:46.234 "max_latency_us": 7015926.69090909 00:21:46.234 } 00:21:46.234 ], 00:21:46.234 "core_count": 1 00:21:46.234 } 00:21:46.518 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95942 00:21:46.518 17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.518 [2024-11-19 17:19:41.199427] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:21:46.518 [2024-11-19 17:19:41.199548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95942 ] 00:21:46.518 [2024-11-19 17:19:41.355018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.518 [2024-11-19 17:19:41.421814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.518 Running I/O for 90 seconds... 00:21:46.518 11044.00 IOPS, 43.14 MiB/s [2024-11-19T17:20:39.051Z] 10857.50 IOPS, 42.41 MiB/s [2024-11-19T17:20:39.051Z] 10562.67 IOPS, 41.26 MiB/s [2024-11-19T17:20:39.051Z] 10430.25 IOPS, 40.74 MiB/s [2024-11-19T17:20:39.051Z] 10386.40 IOPS, 40.57 MiB/s [2024-11-19T17:20:39.051Z] 10319.67 IOPS, 40.31 MiB/s [2024-11-19T17:20:39.051Z] 10272.57 IOPS, 40.13 MiB/s [2024-11-19T17:20:39.051Z] 10236.38 IOPS, 39.99 MiB/s [2024-11-19T17:20:39.051Z] [2024-11-19 17:19:51.479012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.518 [2024-11-19 17:19:51.479332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.518 [2024-11-19 17:19:51.479361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.518 [2024-11-19 17:19:51.479376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:21:46.518 [2024-11-19 17:19:51.479408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:46.518 [2024-11-19 17:19:51.479423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:21:46.518 - 00:21:46.521 [2024-11-19 17:19:51.479443 - 17:19:51.486255] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~120 similar command/completion pairs elided: READ lba:22488-23064 and WRITE lba:23136-23504, len:8, sqid:1; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:21:46.521 10202.33 IOPS, 39.85 MiB/s [2024-11-19T17:20:39.054Z] 10252.60 IOPS, 40.05 MiB/s [2024-11-19T17:20:39.054Z] 10306.82 IOPS, 40.26 MiB/s [2024-11-19T17:20:39.054Z] 10339.50 IOPS, 40.39 MiB/s [2024-11-19T17:20:39.054Z] 10363.92 IOPS, 40.48 MiB/s [2024-11-19T17:20:39.054Z] 10392.29 IOPS, 40.59 MiB/s [2024-11-19T17:20:39.054Z]
00:21:46.521 [2024-11-19 17:19:58.030390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:46.521 [2024-11-19 17:19:58.030451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:21:46.521 - 00:21:46.524 [2024-11-19 17:19:58.030502 - 17:19:58.034023] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~70 similar command/completion pairs elided: WRITE lba:57168-57720 and READ lba:57016-57032, len:8, sqid:1; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:21:46.524 [2024-11-19 17:19:58.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1
lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.034062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.034085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.034117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.034168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.034182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.034209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.034224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.034244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.034259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.035628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.035655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.035679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.035695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.036833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.036864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.036893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.036912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.036935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.036951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.036974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.036990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.037028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.037067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.037106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.524 [2024-11-19 17:19:58.037204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:21:46.524 [2024-11-19 17:19:58.037440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.037973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.037995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.038033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.038072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.038110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.038163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.524 [2024-11-19 17:19:58.038216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.524 [2024-11-19 17:19:58.038243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.038279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.038757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.038872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.038930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.038981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.525 [2024-11-19 17:19:58.039181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.039967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.039986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.040019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.040082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.040124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.040162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.525 [2024-11-19 17:19:58.040210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.525 [2024-11-19 17:19:58.040226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.526 
[2024-11-19 17:19:58.040287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.040956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.040991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041087] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041442] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.526 [2024-11-19 17:19:58.041699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.526 [2024-11-19 17:19:58.041713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.041980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.527 [2024-11-19 17:19:58.041995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.042014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.527 [2024-11-19 17:19:58.042034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.042054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.527 [2024-11-19 17:19:58.042069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.042087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.527 [2024-11-19 17:19:58.042102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.527 [2024-11-19 17:19:58.042121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.527 [2024-11-19 17:19:58.042135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... several hundred further command/completion pairs in the same form: every READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) outstanding on qid:1, covering lba:57016 through lba:58032 in len:8 steps, completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, sqhd advancing through 0x0000-0x007f ...]
00:21:46.532 [2024-11-19 17:19:58.053264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:46.532 [2024-11-19 17:19:58.053279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:21:46.532 [2024-11-19 17:19:58.053298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.532 [2024-11-19 17:19:58.053330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.532 [2024-11-19 17:19:58.053363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.532 [2024-11-19 17:19:58.053396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.532 [2024-11-19 17:19:58.053429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.532 [2024-11-19 17:19:58.053461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.532 [2024-11-19 17:19:58.053475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.053965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.053981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.054009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.054026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.054048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.054064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.054086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.054117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.054182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.054197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.533 [2024-11-19 17:19:58.055372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.055822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.055837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.056029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.056151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.056179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.056197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.533 [2024-11-19 17:19:58.056219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.533 [2024-11-19 17:19:58.056236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.056280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:21:46.534 [2024-11-19 17:19:58.056855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.056965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.056987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.057003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.534 [2024-11-19 17:19:58.057040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.534 [2024-11-19 17:19:58.057793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.534 [2024-11-19 17:19:58.057818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.057834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.057857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.057887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.057912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.057929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.535 [2024-11-19 17:19:58.058770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.058965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.058981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:21:46.535 [2024-11-19 17:19:58.059943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.059981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.059996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.060018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.060034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.060081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.535 [2024-11-19 17:19:58.060100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.535 [2024-11-19 17:19:58.060122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.060964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.060986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.536 [2024-11-19 17:19:58.061219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.061272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.061287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.536 [2024-11-19 17:19:58.062385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.536 [2024-11-19 17:19:58.062401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.062969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.062985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:21:46.537 [2024-11-19 17:19:58.063361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.537 [2024-11-19 17:19:58.063911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.063969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.063985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.064006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.064029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.537 [2024-11-19 17:19:58.064082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.537 [2024-11-19 17:19:58.064101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.538 [2024-11-19 17:19:58.064551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.064743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.064774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.065970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.065991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.538 [2024-11-19 17:19:58.066376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.538 [2024-11-19 17:19:58.066395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.539 
[2024-11-19 17:19:58.066461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.066972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.066988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.539 [2024-11-19 17:19:58.067918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.539 [2024-11-19 17:19:58.067938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.067953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.067972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.067987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.068007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.540 [2024-11-19 17:19:58.068022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.068068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.068087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.068907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.068933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.068958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.068974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.068994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.540 [2024-11-19 17:19:58.069803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.540 [2024-11-19 17:19:58.069839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.540 [2024-11-19 17:19:58.069874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.540 [2024-11-19 17:19:58.069894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.540 [2024-11-19 17:19:58.069909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
[several hundred further NOTICE lines in the same alternating pattern: nvme_qpair.c: 243:nvme_io_qpair_print_command (WRITE/READ sqid:1 nsid:1, lba 57016-58032, len:8) each paired with nvme_qpair.c: 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd cycling through 0000-007f, timestamps 17:19:58.069937-17:19:58.079126]
00:21:46.545 [2024-11-19 17:19:58.079145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:46.545 [2024-11-19 17:19:58.079159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
dnr:0 00:21:46.545 [2024-11-19 17:19:58.079178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.545 [2024-11-19 17:19:58.079358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.545 [2024-11-19 17:19:58.079377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.079968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.079982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.546 [2024-11-19 17:19:58.080217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.080563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.080577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.546 [2024-11-19 17:19:58.081484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.546 [2024-11-19 17:19:58.081502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.081965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.081991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:21:46.547 [2024-11-19 17:19:58.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.547 [2024-11-19 17:19:58.082257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.547 [2024-11-19 17:19:58.082908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.547 [2024-11-19 17:19:58.082928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.082942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.082976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.082990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.548 [2024-11-19 17:19:58.083088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.083510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.083524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:21:46.548 [2024-11-19 17:19:58.084821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.548 [2024-11-19 17:19:58.084980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.548 [2024-11-19 17:19:58.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.549 [2024-11-19 17:19:58.085467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.549 [2024-11-19 17:19:58.085482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:21:46.549 [2024-11-19 17:19:58.085501 - 17:19:58.099802] nvme_qpair.c: [repeated NOTICE pairs from nvme_io_qpair_print_command (line 243) and spdk_nvme_print_completion (line 474): WRITE sqid:1 nsid:1 lba:57160-58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (some LBAs resubmitted under new cids) and READ sqid:1 nsid:1 lba:57016-57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:21:46.553 10305.53 IOPS, 40.26 MiB/s [2024-11-19T17:20:39.086Z] 9721.38 IOPS, 37.97 MiB/s [2024-11-19T17:20:39.086Z] 9738.06 IOPS, 38.04 MiB/s [2024-11-19T17:20:39.086Z] 9722.61 IOPS, 37.98 MiB/s [2024-11-19T17:20:39.086Z] 9728.26 IOPS, 38.00 MiB/s [2024-11-19T17:20:39.086Z] 9715.10 IOPS, 37.95 MiB/s [2024-11-19T17:20:39.086Z] 9711.76 IOPS, 37.94 MiB/s
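The len:8 transfers above, together with the periodic throughput samples, pin down the I/O size. A quick cross-check, assuming the namespace uses 512-byte logical blocks (the block size is not stated in this excerpt): len:8 blocks is 4096 B per command, and

    9721.38 IOPS x 4096 B = 39,818,772 B/s ~ 37.97 MiB/s

which matches the sampled bandwidth, so each command in this phase is a 4 KiB I/O.

The (03/02) pair in every completion is status code type 0x3 (path-related) with status code 0x02 (asymmetric access inaccessible), which spdk_nvme_print_completion renders as ASYMMETRIC ACCESS INACCESSIBLE: a retryable path error rather than a media error, consistent with the path-failover scenario this test exercises. Below is a minimal sketch of how an I/O completion callback could classify these completions, assuming SPDK's public headers are on the include path; the helper and the callback are illustrative, not code from this test:

    #include <stdbool.h>
    #include <stdio.h>

    #include "spdk/nvme.h" /* struct spdk_nvme_cpl and the NVMe status enums */

    /* Illustrative helper (not part of SPDK): true when a completion carries
     * the "(03/02)" status logged above, i.e. SCT 0x3 (path-related) with
     * SC 0x02 (asymmetric access inaccessible). */
    static bool
    cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_PATH &&
               cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
    }

    /* Hypothetical I/O completion callback: a multipath-aware caller would
     * requeue such I/O on another path instead of failing it, which is the
     * behavior this phase of the test exercises. */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;

        if (cpl_is_ana_inaccessible(cpl)) {
            fprintf(stderr, "ANA inaccessible on this path; retry elsewhere\n");
            return;
        }
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "I/O failed: sct=%#x sc=%#x\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }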
[2024-11-19T17:20:39.086Z]
00:21:46.553 [2024-11-19 17:20:05.166241 - 17:20:05.170448] nvme_qpair.c: [repeated NOTICE pairs from nvme_io_qpair_print_command (line 243) and spdk_nvme_print_completion (line 474): READ sqid:1 nsid:1 lba:3992-4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:4320-4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:21:46.554 [2024-11-19 17:20:05.170474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4008
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-11-19 17:20:05.170491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.554 [2024-11-19 17:20:05.170517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-11-19 17:20:05.170548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.554 [2024-11-19 17:20:05.170590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-11-19 17:20:05.170606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.170944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.170984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 
17:20:05.171412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.171940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.171971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.555 [2024-11-19 17:20:05.172239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-11-19 17:20:05.172256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.555 9704.14 IOPS, 37.91 MiB/s [2024-11-19T17:20:39.088Z] 9282.22 IOPS, 36.26 MiB/s [2024-11-19T17:20:39.088Z] 8895.46 IOPS, 34.75 MiB/s [2024-11-19T17:20:39.088Z] 8539.64 IOPS, 33.36 MiB/s [2024-11-19T17:20:39.088Z] 8211.19 IOPS, 32.07 MiB/s [2024-11-19T17:20:39.088Z] 7907.07 
IOPS, 30.89 MiB/s [2024-11-19T17:20:39.088Z] 7624.68 IOPS, 29.78 MiB/s [2024-11-19T17:20:39.088Z] 7371.69 IOPS, 28.80 MiB/s [2024-11-19T17:20:39.088Z] 7453.03 IOPS, 29.11 MiB/s [2024-11-19T17:20:39.088Z] 7532.10 IOPS, 29.42 MiB/s [2024-11-19T17:20:39.088Z] 7604.53 IOPS, 29.71 MiB/s [2024-11-19T17:20:39.088Z] 7670.82 IOPS, 29.96 MiB/s [2024-11-19T17:20:39.088Z] 7733.62 IOPS, 30.21 MiB/s [2024-11-19T17:20:39.088Z] 7792.00 IOPS, 30.44 MiB/s [2024-11-19T17:20:39.088Z] [2024-11-19 17:20:18.623817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.623862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.623889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.623905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.623920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.623933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.623947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.623961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.623975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.623988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-11-19 17:20:18.624722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.556 [2024-11-19 17:20:18.624838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.624982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.624997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-11-19 17:20:18.625164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.556 [2024-11-19 17:20:18.625177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.625976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.625999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118600 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.557 [2024-11-19 17:20:18.626150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.557 [2024-11-19 17:20:18.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118640 len:8 PRP1 0x0 PRP2 0x0 00:21:46.557 [2024-11-19 17:20:18.626217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.557 [2024-11-19 17:20:18.626246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.557 [2024-11-19 17:20:18.626257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118648 len:8 PRP1 0x0 PRP2 0x0 00:21:46.557 [2024-11-19 17:20:18.626270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.557 [2024-11-19 17:20:18.626293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.557 [2024-11-19 17:20:18.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118656 len:8 PRP1 0x0 PRP2 0x0 00:21:46.557 [2024-11-19 17:20:18.626315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.557 [2024-11-19 17:20:18.626333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.557 [2024-11-19 17:20:18.626344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626354] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118664 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118672 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118680 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118688 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118696 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118704 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558 [2024-11-19 17:20:18.626611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.558 [2024-11-19 17:20:18.626621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.558 [2024-11-19 17:20:18.626632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118712 len:8 PRP1 0x0 PRP2 0x0 00:21:46.558 [2024-11-19 17:20:18.626645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.558
[2024-11-19 17:20:18.626657 - 17:20:18.637214] nvme_qpair.c: the same four-record sequence (579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually:; 243:nvme_io_qpair_print_command; 474:spdk_nvme_print_completion) repeats for every remaining queued command: WRITE lba:118720 through lba:119008 and READ lba:118072 through lba:118120, each len:8 PRP1 0x0 PRP2 0x0 on sqid:1 cid:0, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.560 [2024-11-19 17:20:18.637421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.560 [2024-11-19 17:20:18.637447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.560 [2024-11-19 17:20:18.637464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.560 [2024-11-19 17:20:18.637477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.560 [2024-11-19 17:20:18.637491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.560 [2024-11-19 17:20:18.637504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.560 [2024-11-19 17:20:18.637518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.560 [2024-11-19 17:20:18.637531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.560 [2024-11-19 17:20:18.637544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1190 is same with the state(6) to be set
00:21:46.560 [2024-11-19 17:20:18.639199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:21:46.560 [2024-11-19 17:20:18.639240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e1190 (9): Bad file descriptor
00:21:46.560 [2024-11-19 17:20:18.639363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.560 [2024-11-19 17:20:18.639393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1190 with addr=10.0.0.3, port=4421
00:21:46.560 [2024-11-19 17:20:18.639410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1190 is same with the state(6) to be set
00:21:46.560 [2024-11-19 17:20:18.639851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0x17e1190 (9): Bad file descriptor 00:21:46.560 [2024-11-19 17:20:18.640166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:46.560 [2024-11-19 17:20:18.640192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:46.560 [2024-11-19 17:20:18.640208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:46.560 [2024-11-19 17:20:18.640223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:46.560 [2024-11-19 17:20:18.640238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:46.560
7856.67 IOPS, 30.69 MiB/s [2024-11-19T17:20:39.093Z] 7917.46 IOPS, 30.93 MiB/s [2024-11-19T17:20:39.093Z] 7982.82 IOPS, 31.18 MiB/s [2024-11-19T17:20:39.093Z] 8048.67 IOPS, 31.44 MiB/s [2024-11-19T17:20:39.093Z] 8109.15 IOPS, 31.68 MiB/s [2024-11-19T17:20:39.093Z] 8167.20 IOPS, 31.90 MiB/s [2024-11-19T17:20:39.093Z] 8224.10 IOPS, 32.13 MiB/s [2024-11-19T17:20:39.093Z] 8279.53 IOPS, 32.34 MiB/s [2024-11-19T17:20:39.093Z] 8329.55 IOPS, 32.54 MiB/s [2024-11-19T17:20:39.093Z] 8375.33 IOPS, 32.72 MiB/s [2024-11-19T17:20:39.093Z]
[2024-11-19 17:20:28.700766] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:46.560
8416.65 IOPS, 32.88 MiB/s [2024-11-19T17:20:39.093Z] 8445.98 IOPS, 32.99 MiB/s [2024-11-19T17:20:39.093Z] 8473.96 IOPS, 33.10 MiB/s [2024-11-19T17:20:39.093Z] 8504.33 IOPS, 33.22 MiB/s [2024-11-19T17:20:39.093Z] 8532.26 IOPS, 33.33 MiB/s [2024-11-19T17:20:39.093Z] 8559.63 IOPS, 33.44 MiB/s [2024-11-19T17:20:39.093Z] 8583.12 IOPS, 33.53 MiB/s [2024-11-19T17:20:39.093Z] 8606.94 IOPS, 33.62 MiB/s [2024-11-19T17:20:39.093Z] 8630.63 IOPS, 33.71 MiB/s [2024-11-19T17:20:39.093Z] 8653.38 IOPS, 33.80 MiB/s [2024-11-19T17:20:39.093Z]
Received shutdown signal, test time was about 55.558841 seconds
00:21:46.560
00:21:46.560 Latency(us)
00:21:46.560 [2024-11-19T17:20:39.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.560 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:46.560 Verification LBA range: start 0x0 length 0x4000
00:21:46.560 Nvme0n1 : 55.56 8663.76 33.84 0.00 0.00 14746.45 983.04 7015926.69
00:21:46.560 [2024-11-19T17:20:39.093Z] ===================================================================================================================
00:21:46.560 [2024-11-19T17:20:39.093Z] Total : 8663.76 33.84 0.00 0.00 14746.45 983.04 7015926.69
00:21:46.560
17:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath --
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.827 rmmod nvme_tcp 00:21:46.827 rmmod nvme_fabrics 00:21:46.827 rmmod nvme_keyring 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:46.827 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95844 ']' 00:21:46.828 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95844 00:21:46.828 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95844 ']' 00:21:46.828 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95844 00:21:46.828 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:46.828 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95844 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.091 killing process with pid 95844 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95844' 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95844 00:21:47.091 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95844 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.349 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:47.350 
17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.350 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:47.608 00:21:47.608 real 1m1.988s 00:21:47.608 user 2m53.379s 00:21:47.608 sys 0m14.662s 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.608 ************************************ 00:21:47.608 END TEST nvmf_host_multipath 00:21:47.608 ************************************ 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.608 ************************************ 00:21:47.608 START TEST nvmf_timeout 00:21:47.608 ************************************ 00:21:47.608 17:20:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:47.608 * Looking for test storage... 
00:21:47.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:47.608 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.608 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.608 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.868 --rc genhtml_branch_coverage=1 00:21:47.868 --rc genhtml_function_coverage=1 00:21:47.868 --rc genhtml_legend=1 00:21:47.868 --rc geninfo_all_blocks=1 00:21:47.868 --rc geninfo_unexecuted_blocks=1 00:21:47.868 00:21:47.868 ' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.868 --rc genhtml_branch_coverage=1 00:21:47.868 --rc genhtml_function_coverage=1 00:21:47.868 --rc genhtml_legend=1 00:21:47.868 --rc geninfo_all_blocks=1 00:21:47.868 --rc geninfo_unexecuted_blocks=1 00:21:47.868 00:21:47.868 ' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.868 --rc genhtml_branch_coverage=1 00:21:47.868 --rc genhtml_function_coverage=1 00:21:47.868 --rc genhtml_legend=1 00:21:47.868 --rc geninfo_all_blocks=1 00:21:47.868 --rc geninfo_unexecuted_blocks=1 00:21:47.868 00:21:47.868 ' 00:21:47.868 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.868 --rc genhtml_branch_coverage=1 00:21:47.868 --rc genhtml_function_coverage=1 00:21:47.868 --rc genhtml_legend=1 00:21:47.868 --rc geninfo_all_blocks=1 00:21:47.868 --rc geninfo_unexecuted_blocks=1 00:21:47.869 00:21:47.869 ' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.869 
17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=[/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin repeated several times, followed by the system PATH] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same long PATH as above] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same long PATH as above] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo [the exported PATH] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.869 17:20:40
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:47.869 Cannot find device "nvmf_init_br" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:47.869 Cannot find device "nvmf_init_br2" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:47.869 Cannot find device "nvmf_tgt_br" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.869 Cannot find device "nvmf_tgt_br2" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:47.869 Cannot find device "nvmf_init_br" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:47.869 Cannot find device "nvmf_init_br2" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:47.869 Cannot find device "nvmf_tgt_br" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:47.869 Cannot find device "nvmf_tgt_br2" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:47.869 Cannot find device "nvmf_br" 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:47.869 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:47.869 Cannot find device "nvmf_init_if" 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:47.870 Cannot find device "nvmf_init_if2" 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:47.870 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
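Taken together, the nvmf_veth_init trace above amounts to a small standalone script. A minimal sketch follows, using the same interface names, namespace, and 10.0.0.0/24 addresses the log shows (root privileges assumed; the library's pre-cleanup and error handling are omitted); the pings that come next in the log are the resulting reachability check:

# Sketch: rebuild the test topology by hand (commands as traced above).
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_if ends carry
# the addresses, the *_br ends get enslaved to a bridge below.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1/.2 stay in the root namespace; 10.0.0.3/.4 live in the target netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and wire the four *_br ends into one bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip link set nvmf_init_br up
ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open NVMe/TCP port 4420 on the initiator interfaces, let bridged traffic
# through, then verify reachability exactly as the log does next.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3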
00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:48.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:48.129 00:21:48.129 --- 10.0.0.3 ping statistics --- 00:21:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.129 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:48.129 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:48.129 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:21:48.129 00:21:48.129 --- 10.0.0.4 ping statistics --- 00:21:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.129 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:21:48.129 00:21:48.129 --- 10.0.0.1 ping statistics --- 00:21:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.129 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:48.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:21:48.129 00:21:48.129 --- 10.0.0.2 ping statistics --- 00:21:48.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.129 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97271 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97271 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:48.129 17:20:40 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97271 ']' 00:21:48.129 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.130 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.130 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.130 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.130 17:20:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.388 [2024-11-19 17:20:40.711308] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:21:48.388 [2024-11-19 17:20:40.711403] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.388 [2024-11-19 17:20:40.854327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:48.388 [2024-11-19 17:20:40.899523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.388 [2024-11-19 17:20:40.899582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.388 [2024-11-19 17:20:40.899591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.388 [2024-11-19 17:20:40.899599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.388 [2024-11-19 17:20:40.899605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
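As the app_setup_trace notices above spell out, the tracepoints enabled for this run (group mask 0xFFFF) can be inspected live or preserved for offline analysis. A sketch following those notices, with the spdk_trace binary path assumed from the repo layout seen elsewhere in this log:

    # Live snapshot of the nvmf tracepoint group for app instance 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or, as the notice says, keep the shared-memory trace for later debugging
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0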
00:21:48.388 [2024-11-19 17:20:40.900839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.388 [2024-11-19 17:20:40.900859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.325 17:20:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.584 [2024-11-19 17:20:41.994303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.584 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:49.843 Malloc0 00:21:49.843 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.101 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.360 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:50.619 [2024-11-19 17:20:42.925803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:50.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97358 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97358 /var/tmp/bdevperf.sock 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97358 ']' 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.619 17:20:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.619 [2024-11-19 17:20:42.986904] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:21:50.619 [2024-11-19 17:20:42.987005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97358 ] 00:21:50.619 [2024-11-19 17:20:43.134122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.878 [2024-11-19 17:20:43.196546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.878 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.878 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:50.878 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:51.137 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:51.396 NVMe0n1 00:21:51.396 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.396 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97392 00:21:51.396 17:20:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:51.654 Running I/O for 10 seconds... 
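Stripped of the xtrace prefixes, the bring-up that host/timeout.sh performed above reduces to the following RPC sequence; every command and flag is exactly as shown in this trace, only the $RPC shorthand is ours:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side (default RPC socket): TCP transport, malloc-backed namespace, listener
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Initiator side (bdevperf's socket): the retry/reconnect knobs the timeout test exercises
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The listener removed a few lines below (host/timeout.sh@55) is this same 10.0.0.3:4420, which is what turns the steady ~10.6k IOPS into the abort flood that follows.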
00:21:52.595 17:20:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:52.595 10647.00 IOPS, 41.59 MiB/s [2024-11-19T17:20:45.128Z] [2024-11-19 17:20:45.075089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b6b0 is same with the state(6) to be set
[the tcp.c:1773 line above repeats roughly one hundred more times, timestamps 17:20:45.075176 through 17:20:45.076033, always for tqpair=0x219b6b0]
00:21:52.597 [2024-11-19 17:20:45.077648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.597 [2024-11-19 17:20:45.077705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the command/completion pair above repeats once per in-flight I/O, timestamps 17:20:45.077728 through 17:20:45.086478: READ commands for lba 96176 through 96912 and WRITE commands from lba 96920 onward, len:8 each with varying cid, every one completed as ABORTED - SQ DELETION (00/08) qid:1]
sqid:1 cid:115 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.086966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.086974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.087315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.087573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97064 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.087594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.087614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.599 [2024-11-19 17:20:45.087634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.599 [2024-11-19 17:20:45.087777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 
17:20:45.088473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.600 [2024-11-19 17:20:45.088823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.088853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:52.600 [2024-11-19 17:20:45.089161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.600 [2024-11-19 17:20:45.089171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:21:52.600 [2024-11-19 17:20:45.089181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.089530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.600 [2024-11-19 17:20:45.089554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.089566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.600 [2024-11-19 17:20:45.089574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.089584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.600 [2024-11-19 17:20:45.089593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 17:20:45.089603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.600 [2024-11-19 17:20:45.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.600 [2024-11-19 
17:20:45.089620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628f50 is same with the state(6) to be set 00:21:52.600 [2024-11-19 17:20:45.090198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:52.600 [2024-11-19 17:20:45.090249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1628f50 (9): Bad file descriptor 00:21:52.600 [2024-11-19 17:20:45.090509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.600 [2024-11-19 17:20:45.090542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628f50 with addr=10.0.0.3, port=4420 00:21:52.600 [2024-11-19 17:20:45.090555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628f50 is same with the state(6) to be set 00:21:52.600 [2024-11-19 17:20:45.090575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1628f50 (9): Bad file descriptor 00:21:52.600 [2024-11-19 17:20:45.090591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:52.600 [2024-11-19 17:20:45.090601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:52.600 [2024-11-19 17:20:45.090948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:52.600 [2024-11-19 17:20:45.090961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:52.600 [2024-11-19 17:20:45.090972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:52.600 17:20:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:54.472 6010.50 IOPS, 23.48 MiB/s [2024-11-19T17:20:47.263Z] 4007.00 IOPS, 15.65 MiB/s [2024-11-19T17:20:47.263Z] [2024-11-19 17:20:47.091086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.730 [2024-11-19 17:20:47.091167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628f50 with addr=10.0.0.3, port=4420 00:21:54.730 [2024-11-19 17:20:47.091182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628f50 is same with the state(6) to be set 00:21:54.730 [2024-11-19 17:20:47.091205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1628f50 (9): Bad file descriptor 00:21:54.730 [2024-11-19 17:20:47.091223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:54.731 [2024-11-19 17:20:47.091233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:54.731 [2024-11-19 17:20:47.091242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:54.731 [2024-11-19 17:20:47.091253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
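The same failure sequence now repeats on every reconnect attempt: connect() fails with errno = 111 (ECONNREFUSED, since the listener on 10.0.0.3:4420 has been removed), the qpair is flushed with a Bad file descriptor error, nvme_ctrlr_process_init reports the controller in error state, the reset completes as failed, and bdev_nvme immediately schedules the next reset while timeout.sh sleeps. A hedged sketch of how this loop can be observed from outside, reusing only the rpc.py path, socket, RPC name, and jq filter that appear in this log (the polling loop itself is illustrative and is not part of timeout.sh):

    # Sketch only: watch bdevperf's view of the controller while the target
    # listener is down. Paths and the RPC name come from the trace in this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    for _ in 1 2 3 4 5; do
        # The controller stays registered while bdev_nvme keeps retrying;
        # each attempt above dies with connect() errno 111 (ECONNREFUSED).
        "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 2   # same pacing as the 'sleep 2' at host/timeout.sh@56
    done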
00:21:54.731 [2024-11-19 17:20:47.091264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:54.731 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:54.731 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.731 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:54.989 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:54.989 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:54.989 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:54.989 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:55.247 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:55.247 17:20:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:56.441 3005.25 IOPS, 11.74 MiB/s [2024-11-19T17:20:49.233Z] 2404.20 IOPS, 9.39 MiB/s [2024-11-19T17:20:49.233Z] [2024-11-19 17:20:49.091380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.700 [2024-11-19 17:20:49.091466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628f50 with addr=10.0.0.3, port=4420 00:21:56.700 [2024-11-19 17:20:49.091483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628f50 is same with the state(6) to be set 00:21:56.700 [2024-11-19 17:20:49.091506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1628f50 (9): Bad file descriptor 00:21:56.700 [2024-11-19 17:20:49.091524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:56.700 [2024-11-19 17:20:49.091534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:56.700 [2024-11-19 17:20:49.091544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:56.700 [2024-11-19 17:20:49.091565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:56.700 [2024-11-19 17:20:49.091575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:58.570 2003.50 IOPS, 7.83 MiB/s [2024-11-19T17:20:51.103Z] 1717.29 IOPS, 6.71 MiB/s [2024-11-19T17:20:51.103Z] [2024-11-19 17:20:51.091606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:58.570 [2024-11-19 17:20:51.091666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:58.570 [2024-11-19 17:20:51.091694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:58.570 [2024-11-19 17:20:51.091703] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:21:58.570 [2024-11-19 17:20:51.091714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
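The @41 and @37 xtrace lines above show exactly what the get_controller and get_bdev helpers expand to. A reconstruction from those lines (the real function bodies live in host/timeout.sh, outside this log, so treat this as a sketch):

    get_controller() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_bdevs | jq -r '.[].name'
    }
    # timeout.sh@57/@58 assert that both names survived the failed resets:
    [[ $(get_controller) == "NVMe0" ]]
    [[ $(get_bdev) == "NVMe0n1" ]]

Note the contrast with the later checks at @62/@63 (visible below), which expect empty strings: mid-retry the controller (NVMe0) and its namespace bdev (NVMe0n1) must still be registered, but once the controller finally gives up and is deleted, both lookups are expected to return nothing.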
00:21:59.764 1502.62 IOPS, 5.87 MiB/s
00:21:59.764 Latency(us)
00:21:59.764 [2024-11-19T17:20:52.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:59.764 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:59.764 Verification LBA range: start 0x0 length 0x4000
00:21:59.764 NVMe0n1 : 8.15 1475.06 5.76 15.71 0.00 85885.63 1876.71 7046430.72
00:21:59.764 [2024-11-19T17:20:52.297Z] ===================================================================================================================
00:21:59.764 [2024-11-19T17:20:52.297Z] Total : 1475.06 5.76 15.71 0.00 85885.63 1876.71 7046430.72
00:21:59.764 {
00:21:59.764 "results": [
00:21:59.764 {
00:21:59.764 "job": "NVMe0n1",
00:21:59.764 "core_mask": "0x4",
00:21:59.764 "workload": "verify",
00:21:59.764 "status": "finished",
00:21:59.764 "verify_range": {
00:21:59.764 "start": 0,
00:21:59.764 "length": 16384
00:21:59.764 },
00:21:59.764 "queue_depth": 128,
00:21:59.764 "io_size": 4096,
00:21:59.764 "runtime": 8.149496,
00:21:59.764 "iops": 1475.060543621348,
00:21:59.764 "mibps": 5.7619552485208905,
00:21:59.764 "io_failed": 128,
00:21:59.764 "io_timeout": 0,
00:21:59.764 "avg_latency_us": 85885.63135012983,
00:21:59.764 "min_latency_us": 1876.7127272727273,
00:21:59.764 "max_latency_us": 7046430.72
00:21:59.764 }
00:21:59.764 ],
00:21:59.764 "core_count": 1
00:21:59.764 }
00:22:00.331 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:22:00.331 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:00.331 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:00.590 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:00.590 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:22:00.590 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:00.590 17:20:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97392
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97358
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97358 ']'
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97358
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97358
00:22:00.851 killing process with pid 97358
Received shutdown signal, test time was about 9.372629 seconds
00:22:00.851
00:22:00.851 Latency(us)
00:22:00.851 [2024-11-19T17:20:53.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:00.851 [2024-11-19T17:20:53.384Z] ===================================================================================================================
00:22:00.851 [2024-11-19T17:20:53.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97358'
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97358
00:22:00.851 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97358
00:22:01.153 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-19 17:20:53.690685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:01.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:01.432 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:01.432 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97546
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97546 /var/tmp/bdevperf.sock
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97546 ']'
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:01.433 17:20:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:01.433 [2024-11-19 17:20:53.757822] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
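By this point the first bdevperf (pid 97358) has been killed, the TCP listener has been restored, and a second bdevperf (pid 97546) is booting; waitforlisten blocks until the new process answers RPCs on /var/tmp/bdevperf.sock. A condensed sketch of that relaunch, with every path and flag copied from the trace above and only the polling stand-in for waitforlisten assumed:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    # waitforlisten (common/autotest_common.sh) waits for the RPC socket to
    # come up; a minimal stand-in is to poll a cheap RPC until it succeeds:
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done

With -z, bdevperf idles until perform_tests is sent over the RPC socket (the bdevperf.py call at host/timeout.sh@83 below), which is why the controller can first be attached with its reconnect parameters (--ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1) before any I/O starts.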
00:22:01.433 [2024-11-19 17:20:53.757915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97546 ] 00:22:01.433 [2024-11-19 17:20:53.899580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.433 [2024-11-19 17:20:53.955961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.366 17:20:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.366 17:20:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:02.366 17:20:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.624 17:20:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:02.883 NVMe0n1 00:22:02.883 17:20:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.883 17:20:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97598 00:22:02.883 17:20:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:02.883 Running I/O for 10 seconds... 00:22:03.818 17:20:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:04.079 10806.00 IOPS, 42.21 MiB/s [2024-11-19T17:20:56.612Z] [2024-11-19 17:20:56.522022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.079 [2024-11-19 17:20:56.522100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.079 [2024-11-19 17:20:56.522152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.079 [2024-11-19 17:20:56.522172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.079 [2024-11-19 17:20:56.522220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 
17:20:56.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.079 [2024-11-19 17:20:56.522387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.079 [2024-11-19 17:20:56.522398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.522953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.522964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523401] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.523587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.523985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.080 [2024-11-19 17:20:56.524728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.524863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.524893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.524913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.524924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.525319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.525342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.525362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.525381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.080 [2024-11-19 17:20:56.525401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.080 [2024-11-19 17:20:56.525685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.525765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.525779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.525790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.525799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.525811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.525820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 
[2024-11-19 17:20:56.525831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.525840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.525851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.525860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.081 [2024-11-19 17:20:56.526423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.081 [2024-11-19 17:20:56.526433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:04.081 [2024-11-19 17:20:56.526515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.081 [2024-11-19 17:20:56.526529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.081 [2024-11-19 17:20:56.526538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print-command/print-completion pairing repeats for the remaining in-flight I/O on qid:1, READs lba:101824 through lba:101872 and WRITEs lba:102280 through lba:102520, every one completed with ABORTED - SQ DELETION (00/08); duplicate entries condensed ...]
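Each completion above carries the NVMe status pair "(00/08)": status code type 0x0 (Generic Command Status) and status code 0x08 (Command Aborted due to SQ Deletion), the expected status for I/O that was in flight on a submission queue when that queue was torn down during the controller reset. A minimal decoder for this log notation, as an illustrative Python sketch (the lookup table is a small excerpt of the NVMe base specification, not SPDK code):

    import re

    # Small excerpt of the NVMe status tables; not exhaustive.
    SCT_NAMES = {
        0x0: "Generic Command Status",
        0x1: "Command Specific Status",
        0x2: "Media and Data Integrity Errors",
        0x3: "Path Related Status",
    }
    GENERIC_SC_NAMES = {
        0x00: "Successful Completion",
        0x07: "Command Abort Requested",
        0x08: "Command Aborted due to SQ Deletion",
    }

    def decode_status(token):
        """Decode a log token like '(00/08)' into (SCT name, SC name)."""
        m = re.fullmatch(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", token)
        sct, sc = int(m.group(1), 16), int(m.group(2), 16)
        sct_name = SCT_NAMES.get(sct, f"SCT {sct:#04x}")
        sc_name = GENERIC_SC_NAMES.get(sc, f"SC {sc:#04x}") if sct == 0 else f"SC {sc:#04x}"
        return f"{sct_name} / {sc_name}"

    print(decode_status("(00/08)"))
    # -> Generic Command Status / Command Aborted due to SQ Deletion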
00:22:04.082 [2024-11-19 17:20:56.530553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:04.082 [2024-11-19 17:20:56.530574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0
00:22:04.082 [2024-11-19 17:20:56.530585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.082 [2024-11-19 17:20:56.530599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the abort-queued-i/o and manual-completion sequence repeats for the still-queued READs lba:101888 through lba:102056, each finished with ABORTED - SQ DELETION (00/08); duplicate entries condensed ...]
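Two abort paths are visible in the flood above. Commands already submitted to the transport are failed through normal completion processing, while commands still waiting in the software queue (recognizable by PRP1 0x0 PRP2 0x0, since data pointers were never built for them) are completed "manually" by the driver, per the nvme_qpair_abort_queued_reqs and nvme_qpair_manual_complete_request messages. A toy Python model of that distinction (names and structures are illustrative only, not SPDK's actual C implementation):

    from collections import deque
    from dataclasses import dataclass
    from typing import Optional, Tuple

    ABORTED_SQ_DELETION = (0x0, 0x08)  # (status code type, status code)

    @dataclass
    class Request:
        opcode: str
        lba: int
        status: Optional[Tuple[int, int]] = None

    class ToyQpair:
        """Toy model only: the real qpair logic lives in SPDK's nvme_qpair.c."""
        def __init__(self):
            self.outstanding = []    # submitted to the transport
            self.queued = deque()    # accepted but never submitted (no PRPs built)

        def destroy_sq(self):
            # In-flight requests receive abort completions via completion processing.
            for req in self.outstanding:
                req.status = ABORTED_SQ_DELETION
            # Queued requests never reached the wire; the driver completes them
            # locally, mirroring "Command completed manually:" in the log.
            while self.queued:
                req = self.queued.popleft()
                req.status = ABORTED_SQ_DELETION
                print(f"Command completed manually: {req.opcode} lba:{req.lba}")

    qp = ToyQpair()
    # The queued LBAs seen above: 101880 through 102056, step 8.
    qp.queued.extend(Request("READ", lba) for lba in range(101880, 102064, 8))
    qp.destroy_sq()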
00:22:04.083 [2024-11-19 17:20:56.534246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:04.083 [2024-11-19 17:20:56.534271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort is printed for the admin queue's remaining ASYNC EVENT REQUESTs, cid:1 through cid:3 ...]
00:22:04.083 [2024-11-19 17:20:56.534341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set
00:22:04.083 [2024-11-19 17:20:56.534816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:04.083 [2024-11-19 17:20:56.534853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor
00:22:04.083 [2024-11-19 17:20:56.535058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:04.083 [2024-11-19 17:20:56.535091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fef50 with addr=10.0.0.3, port=4420
00:22:04.083 [2024-11-19 17:20:56.535103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set
00:22:04.083 [2024-11-19 17:20:56.535123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor
00:22:04.083 [2024-11-19 17:20:56.535140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:04.083 [2024-11-19 17:20:56.535150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:04.083 [2024-11-19 17:20:56.535160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:04.083 [2024-11-19 17:20:56.535171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:04.083 [2024-11-19 17:20:56.535303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
17:20:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:05.019 6344.00 IOPS, 24.78 MiB/s [2024-11-19T17:20:57.552Z]
[2024-11-19 17:20:57.535646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-19 17:20:57.535752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fef50 with addr=10.0.0.3, port=4420
[2024-11-19 17:20:57.535789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set
[2024-11-19 17:20:57.535815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor
[2024-11-19 17:20:57.535834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-19 17:20:57.535843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-19 17:20:57.535854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-19 17:20:57.535865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
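errno 111 is ECONNREFUSED on Linux: each reconnect attempt is refused outright because the test removed the target's listener, so nothing is accepting connections on 10.0.0.3:4420 until it is re-added below. A self-contained Python check of the same failure mode (loopback address used purely for illustration):

    import errno
    import socket

    def try_connect(addr, port):
        """Return 0 on success, else the raw errno from connect()."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            return s.connect_ex((addr, port))

    # With nothing listening on the port, connect_ex() returns 111 on Linux,
    # i.e. errno.ECONNREFUSED, matching "connect() failed, errno = 111" above.
    rc = try_connect("127.0.0.1", 4420)   # 4420 is the NVMe/TCP default port
    print(rc, errno.errorcode.get(rc, "ok"))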
00:22:05.019 [2024-11-19 17:20:57.535877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
17:20:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-19 17:20:57.760300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
17:20:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97598
00:22:06.103 4229.33 IOPS, 16.52 MiB/s [2024-11-19T17:20:58.636Z]
[2024-11-19 17:20:58.554796] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:07.972 3172.00 IOPS, 12.39 MiB/s [2024-11-19T17:21:01.441Z] 4154.60 IOPS, 16.23 MiB/s [2024-11-19T17:21:02.376Z] 5215.67 IOPS, 20.37 MiB/s [2024-11-19T17:21:03.751Z] 5896.14 IOPS, 23.03 MiB/s [2024-11-19T17:21:04.687Z] 6366.88 IOPS, 24.87 MiB/s [2024-11-19T17:21:05.623Z] 6828.44 IOPS, 26.67 MiB/s [2024-11-19T17:21:05.623Z] 7206.00 IOPS, 28.15 MiB/s
Latency(us)
Device Information                                                        : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
NVMe0n1                                                                   : 10.01       7210.13  28.16  0.00    0.00  17729.07  1787.35  3035150.89
===================================================================================================================
Total                                                                     :             7210.13  28.16  0.00    0.00  17729.07  1787.35  3035150.89
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.006333,
      "iops": 7210.133822250369,
      "mibps": 28.164585243165504,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 17729.069664183102,
      "min_latency_us": 1787.3454545454545,
      "max_latency_us": 3035150.8945454545
    }
  ],
  "core_count": 1
}
17:21:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97716
17:21:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
17:21:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
Running I/O for 10 seconds...
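The summary table and the JSON block a few lines above describe the same completed run, and the throughput figure follows directly from IOPS times I/O size: 7210.13 IOPS at 4096 B per I/O is about 28.16 MiB/s. A short Python sketch that re-derives it from the JSON (abbreviated here to the fields actually used):

    import json

    results = json.loads("""
    {
      "results": [
        {
          "job": "NVMe0n1",
          "io_size": 4096,
          "runtime": 10.006333,
          "iops": 7210.133822250369,
          "mibps": 28.164585243165504,
          "io_failed": 0
        }
      ],
      "core_count": 1
    }
    """)

    job = results["results"][0]
    # MiB/s = IOPS * bytes-per-I/O / bytes-per-MiB
    derived = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f"{job['job']}: derived {derived:.2f} MiB/s, reported {job['mibps']:.2f} MiB/s")
    # -> NVMe0n1: derived 28.16 MiB/s, reported 28.16 MiB/s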
00:22:14.027 17:21:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
10379.00 IOPS, 40.54 MiB/s [2024-11-19T17:21:06.820Z]
[2024-11-19 17:21:06.640202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1e90 is same with the state(6) to be set
[... the same target-side message repeats continuously between 17:21:06.640287 and 17:21:06.640641 while the listener is torn down; duplicate lines condensed ...]
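The target-side recv-state error above fires dozens of times within a single millisecond as the listener is removed. Floods like this are much easier to read once consecutive duplicates are collapsed, uniq -c style; an illustrative Python filter that keys on the message with its wall-clock timestamp stripped (the pattern matches this log's "[YYYY-MM-DD hh:mm:ss.us]" prefix):

    import re
    import sys
    from itertools import groupby

    # Strip the wall-clock prefix so lines differing only by timestamp compare equal.
    TS = re.compile(r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\]\s*")

    def collapse(lines):
        for msg, group in groupby(lines, key=lambda ln: TS.sub("", ln)):
            n = len(list(group))
            yield msg if n == 1 else f"{msg}   [repeated {n} times]"

    if __name__ == "__main__":
        for line in collapse(sys.stdin.read().splitlines()):
            print(line)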
00:22:14.288 [2024-11-19 17:21:06.642061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.288 [2024-11-19 17:21:06.642150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... as during the first reset, the print-command/print-completion pairing repeats for the remaining in-flight I/O on qid:1, READs lba:94368 through lba:94768 and WRITEs lba:94800 through lba:95104, all completed with ABORTED - SQ DELETION (00/08); duplicate entries condensed ...]
00:22:14.290 [2024-11-19 17:21:06.648534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94776 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.290 [2024-11-19 17:21:06.648558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.290 [2024-11-19 17:21:06.648880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.290 [2024-11-19 17:21:06.648920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.648941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.648962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.648982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.648993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:14.290 [2024-11-19 17:21:06.649422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.290 [2024-11-19 17:21:06.649928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.290 [2024-11-19 17:21:06.649941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.649951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.649962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.649971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.649982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.649991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.650819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.650844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651155] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.291 [2024-11-19 17:21:06.651462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.291 [2024-11-19 17:21:06.651800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95368 len:8 PRP1 0x0 PRP2 0x0 00:22:14.291 [2024-11-19 17:21:06.651809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.651823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.291 [2024-11-19 17:21:06.651831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.291 [2024-11-19 17:21:06.651838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95376 len:8 PRP1 0x0 PRP2 0x0 00:22:14.291 [2024-11-19 17:21:06.651847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.652219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.291 [2024-11-19 17:21:06.652247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.652259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.291 [2024-11-19 17:21:06.652269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.652279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.291 [2024-11-19 17:21:06.652288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.652297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.291 [2024-11-19 17:21:06.652306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.291 [2024-11-19 17:21:06.652315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set 00:22:14.291 [2024-11-19 17:21:06.652764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:14.291 [2024-11-19 17:21:06.652796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor 00:22:14.291 [2024-11-19 17:21:06.653108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.291 [2024-11-19 17:21:06.653139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fef50 with addr=10.0.0.3, port=4420 00:22:14.291 [2024-11-19 17:21:06.653150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set 00:22:14.291 [2024-11-19 17:21:06.653169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor 00:22:14.291 [2024-11-19 17:21:06.653185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:14.291 [2024-11-19 17:21:06.653194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:14.291 [2024-11-19 17:21:06.653204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:14.291 [2024-11-19 17:21:06.653431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:14.291 [2024-11-19 17:21:06.653453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:14.291 17:21:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:15.225 5897.50 IOPS, 23.04 MiB/s [2024-11-19T17:21:07.758Z] [2024-11-19 17:21:07.653579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.226 [2024-11-19 17:21:07.653660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fef50 with addr=10.0.0.3, port=4420 00:22:15.226 [2024-11-19 17:21:07.653676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fef50 is same with the state(6) to be set 00:22:15.226 [2024-11-19 17:21:07.653699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fef50 (9): Bad file descriptor 00:22:15.226 [2024-11-19 17:21:07.653717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:15.226 [2024-11-19 17:21:07.653726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:15.226 [2024-11-19 17:21:07.653736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
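[Note] "ABORTED - SQ DELETION (00/08)" is NVMe generic command status 08h, Command Aborted due to SQ Deletion: every I/O still queued on the qpair is failed back when the submission queue is torn down for the reset. errno = 111 is ECONNREFUSED, i.e. the target's listener is gone, so each reconnect attempt fails until timeout.sh restores the listener at @102 below. A minimal shell sketch of the same probe loop, assuming nc(1) is available (wait_for_listener is an illustrative helper, not part of timeout.sh):
    # Retry a TCP connect once per delay, as the reconnect path in the log does.
    wait_for_listener() {
        local addr=$1 port=$2 delay=${3:-1}
        until nc -z "$addr" "$port" 2>/dev/null; do
            sleep "$delay"   # each failed probe corresponds to one "connect() failed, errno = 111"
        done
    }
    wait_for_listener 10.0.0.3 4420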
00:22:15.226 [2024-11-19 17:21:07.653748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:22:15.226 [2024-11-19 17:21:07.653771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:16.160 3931.67 IOPS, 15.36 MiB/s [2024-11-19T17:21:08.693Z]
[... the reconnect attempts at 17:21:08 and 17:21:09 fail with the same connect() errno = 111 / "Resetting controller failed." sequence as above; duplicate entries omitted ...]
00:22:17.352 2948.75 IOPS, 11.52 MiB/s [2024-11-19T17:21:09.885Z]
00:22:17.352 [2024-11-19 17:21:09.657770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:17.352 17:21:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:17.610 [2024-11-19 17:21:09.925581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:17.610 17:21:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97716
00:22:18.177 2359.00 IOPS, 9.21 MiB/s [2024-11-19T17:21:10.710Z] [2024-11-19 17:21:10.686682] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
00:22:20.107 3256.00 IOPS, 12.72 MiB/s [2024-11-19T17:21:13.575Z] 4151.57 IOPS, 16.22 MiB/s [2024-11-19T17:21:14.511Z] 4798.38 IOPS, 18.74 MiB/s [2024-11-19T17:21:15.886Z] 5297.78 IOPS, 20.69 MiB/s [2024-11-19T17:21:15.886Z] 5705.70 IOPS, 22.29 MiB/s
00:22:23.353 Latency(us)
00:22:23.353 [2024-11-19T17:21:15.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.353 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:23.353 Verification LBA range: start 0x0 length 0x4000
00:22:23.353 NVMe0n1 : 10.01 5708.72 22.30 4178.92 0.00 12909.49 1482.01 3019898.88
00:22:23.353 [2024-11-19T17:21:15.886Z] ===================================================================================================================
00:22:23.353 [2024-11-19T17:21:15.886Z] Total : 5708.72 22.30 4178.92 0.00 12909.49 0.00 3019898.88
00:22:23.353 {
00:22:23.353   "results": [
00:22:23.353     {
00:22:23.353       "job": "NVMe0n1",
00:22:23.353       "core_mask": "0x4",
00:22:23.353       "workload": "verify",
00:22:23.353       "status": "finished",
00:22:23.353       "verify_range": {
00:22:23.353         "start": 0,
00:22:23.353         "length": 16384
00:22:23.353       },
00:22:23.353       "queue_depth": 128,
00:22:23.353       "io_size": 4096,
00:22:23.353       "runtime": 10.005928,
00:22:23.353       "iops": 5708.715873230349,
00:22:23.353       "mibps": 22.29967137980605,
00:22:23.353       "io_failed": 41814,
00:22:23.353       "io_timeout": 0,
00:22:23.353       "avg_latency_us": 12909.493633597816,
00:22:23.353       "min_latency_us": 1482.0072727272727,
00:22:23.353       "max_latency_us": 3019898.88
00:22:23.353     }
00:22:23.353   ],
00:22:23.353   "core_count": 1
00:22:23.353 }
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97546
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97546 ']'
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97546
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97546
00:22:23.353 killing process with pid 97546
00:22:23.353 Received shutdown signal, test time was about 10.000000 seconds
00:22:23.353 Latency(us)
00:22:23.353 [2024-11-19T17:21:15.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.353 [2024-11-19T17:21:15.886Z] ===================================================================================================================
00:22:23.353 [2024-11-19T17:21:15.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
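[Note] The summary columns can be cross-checked against the JSON block above: Fail/s is io_failed divided by runtime, and MiB/s is IOPS times the 4096-byte I/O size scaled to MiB. A quick check with bc(1), using the values from this run:
    $ echo 'scale=2; 41814 / 10.005928' | bc
    4178.92
    $ echo 'scale=4; 5708.72 * 4096 / 1048576' | bc
    22.2996
which matches the reported 4178.92 Fail/s and 22.30 MiB/s; the 41814 failed I/O are the reads and writes aborted while the controller was being reset.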
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97546'
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97546
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97546
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97838
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97838 /var/tmp/bdevperf.sock
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97838 ']'
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:23.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:23.353 17:21:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:23.353 [2024-11-19 17:21:15.834797] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:22:23.353 [2024-11-19 17:21:15.835177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97838 ]
00:22:23.612 [2024-11-19 17:21:15.977505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:23.612 [2024-11-19 17:21:16.031355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:24.547 17:21:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:24.547 17:21:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:24.547 17:21:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97838 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:22:24.547 17:21:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97866
00:22:24.806 17:21:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:22:24.806 17:21:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:22:25.065 NVMe0n1
00:22:25.065 17:21:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:25.065 17:21:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97920
00:22:25.065 17:21:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:22:25.999 Running I/O for 10 seconds...
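[Note] The @120 attach sets the reconnect policy under test: --reconnect-delay-sec 2 retries the TCP connection every 2 s after a disconnect, and --ctrlr-loss-timeout-sec 5 bounds how long the retries may go on (at most about three attempts) before the controller is given up as lost. rpc.py is a thin JSON-RPC client, so the @120 call is roughly equivalent to the request below sent over the UNIX socket (a sketch; parameter names inferred from the CLI flag names, and a netcat with -U support assumed):
    $ echo '{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
             "params": {"name": "NVMe0", "trtype": "tcp", "traddr": "10.0.0.3",
                        "trsvcid": "4420", "adrfam": "ipv4",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "ctrlr_loss_timeout_sec": 5, "reconnect_delay_sec": 2}}' | nc -U /var/tmp/bdevperf.sock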
00:22:25.999 17:21:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:26.262 20086.00 IOPS, 78.46 MiB/s [2024-11-19T17:21:18.795Z]
00:22:26.262 [2024-11-19 17:21:18.745397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4be0 is same with the state(6) to be set
[... this exact "recv state of tqpair=0x21f4be0 is same with the state(6) to be set" error is logged dozens of consecutive times (17:21:18.745463 through 17:21:18.746394) while the target tears down the qpair after the listener is removed; duplicate lines omitted ...]
00:22:26.262 [2024-11-19 17:21:18.748515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.262 [2024-11-19 17:21:18.748571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.263 [2024-11-19 17:21:18.748606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.263 [2024-11-19 17:21:18.748618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.263 [2024-11-19 17:21:18.748630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.263 [2024-11-19 17:21:18.748640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.266 [2024-11-19 17:21:18.756378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.266 [2024-11-19 17:21:18.756607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.266 [2024-11-19 17:21:18.756650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:26.266 [2024-11-19 17:21:18.756663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108848 len:8 PRP1 0x0 PRP2 0x0
00:22:26.266 [2024-11-19 17:21:18.756672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.266 [2024-11-19 17:21:18.756685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:26.267 [2024-11-19 17:21:18.761365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:26.267 [2024-11-19 17:21:18.761373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:26.267 [2024-11-19 17:21:18.761381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84624 len:8 PRP1 0x0 PRP2 0x0
00:22:26.267 [2024-11-19 17:21:18.761388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.267 [2024-11-19 17:21:18.761708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:26.267 [2024-11-19 17:21:18.761734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.267 [2024-11-19 17:21:18.761781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:26.267 [2024-11-19 17:21:18.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.267 [2024-11-19 17:21:18.761823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:26.267 [2024-11-19 17:21:18.761956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.267 [2024-11-19 17:21:18.762082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:26.267 [2024-11-19 17:21:18.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.267 [2024-11-19 17:21:18.762245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c72f50 is same with the state(6) to be set
00:22:26.267 [2024-11-19 17:21:18.762767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:26.267 [2024-11-19 17:21:18.762819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c72f50 (9): Bad file descriptor
00:22:26.267 [2024-11-19 17:21:18.763121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.267 [2024-11-19 17:21:18.763168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c72f50 with addr=10.0.0.3, port=4420
00:22:26.267 [2024-11-19 17:21:18.763180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c72f50 is same with the state(6) to be set
00:22:26.267 [2024-11-19 17:21:18.763198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c72f50 (9): Bad file descriptor
00:22:26.267 [2024-11-19 17:21:18.763214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:26.268 [2024-11-19 17:21:18.763353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:26.268 [2024-11-19 17:21:18.763483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:26.268 [2024-11-19 17:21:18.763498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:26.268 [2024-11-19 17:21:18.763600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:26.268 17:21:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97920
00:22:28.138 11994.00 IOPS, 46.85 MiB/s
[2024-11-19T17:21:20.930Z] 7996.00 IOPS, 31.23 MiB/s
[2024-11-19T17:21:20.930Z] [2024-11-19 17:21:20.763777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:28.397 [2024-11-19 17:21:20.763852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c72f50 with addr=10.0.0.3, port=4420
00:22:28.397 [2024-11-19 17:21:20.763883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c72f50 is same with the state(6) to be set
00:22:28.397 [2024-11-19 17:21:20.763909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c72f50 (9): Bad file descriptor
00:22:28.397 [2024-11-19 17:21:20.763954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:28.397 [2024-11-19 17:21:20.763965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:28.397 [2024-11-19 17:21:20.763975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:28.397 [2024-11-19 17:21:20.763987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:28.397 [2024-11-19 17:21:20.763997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:30.268 5997.00 IOPS, 23.43 MiB/s
[2024-11-19T17:21:22.801Z] 4797.60 IOPS, 18.74 MiB/s
[2024-11-19T17:21:22.801Z] [2024-11-19 17:21:22.764175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:30.268 [2024-11-19 17:21:22.764229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c72f50 with addr=10.0.0.3, port=4420
00:22:30.268 [2024-11-19 17:21:22.764245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c72f50 is same with the state(6) to be set
00:22:30.268 [2024-11-19 17:21:22.764268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c72f50 (9): Bad file descriptor
00:22:30.268 [2024-11-19 17:21:22.764288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:30.268 [2024-11-19 17:21:22.764298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:30.268 [2024-11-19 17:21:22.764309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:30.268 [2024-11-19 17:21:22.764321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:30.268 [2024-11-19 17:21:22.764331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:32.139 3998.00 IOPS, 15.62 MiB/s
[2024-11-19T17:21:24.930Z] 3426.86 IOPS, 13.39 MiB/s
[2024-11-19T17:21:24.930Z] [2024-11-19 17:21:24.764398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:32.397 [2024-11-19 17:21:24.764458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:32.397 [2024-11-19 17:21:24.764485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:32.397 [2024-11-19 17:21:24.764495] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:22:32.397 [2024-11-19 17:21:24.764507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:33.330 2998.50 IOPS, 11.71 MiB/s
00:22:33.330 Latency(us)
00:22:33.330 [2024-11-19T17:21:25.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.330 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:22:33.331 NVMe0n1 : 8.21 2921.81 11.41 15.59 0.00 43644.34 2740.60 7046430.72
00:22:33.331 [2024-11-19T17:21:25.864Z] ===================================================================================================================
00:22:33.331 [2024-11-19T17:21:25.864Z] Total : 2921.81 11.41 15.59 0.00 43644.34 2740.60 7046430.72
00:22:33.331 {
00:22:33.331 "results": [
00:22:33.331 {
00:22:33.331 "job": "NVMe0n1",
00:22:33.331 "core_mask": "0x4",
00:22:33.331 "workload": "randread",
00:22:33.331 "status": "finished",
00:22:33.331 "queue_depth": 128,
00:22:33.331 "io_size": 4096,
00:22:33.331 "runtime": 8.209967,
00:22:33.331 "iops": 2921.81442385822,
00:22:33.331 "mibps": 11.413337593196172,
00:22:33.331 "io_failed": 128,
00:22:33.331 "io_timeout": 0,
00:22:33.331 "avg_latency_us": 43644.34036460139,
00:22:33.331 "min_latency_us": 2740.5963636363635,
00:22:33.331 "max_latency_us": 7046430.72
00:22:33.331 }
00:22:33.331 ],
00:22:33.331 "core_count": 1
00:22:33.331 }
00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:33.331 Attaching 5 probes...
00:22:33.331 1338.376157: reset bdev controller NVMe0
00:22:33.331 1338.504663: reconnect bdev controller NVMe0
00:22:33.331 3339.282942: reconnect delay bdev controller NVMe0
00:22:33.331 3339.303529: reconnect bdev controller NVMe0
00:22:33.331 5339.673103: reconnect delay bdev controller NVMe0
00:22:33.331 5339.690160: reconnect bdev controller NVMe0
00:22:33.331 7340.007128: reconnect delay bdev controller NVMe0
00:22:33.331 7340.039856: reconnect bdev controller NVMe0
00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97866
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97838
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97838 ']'
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97838
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97838
killing process with pid 97838
Received shutdown signal, test time was about 8.283687 seconds
00:22:33.331
00:22:33.331 Latency(us)
00:22:33.331 [2024-11-19T17:21:25.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.331 [2024-11-19T17:21:25.864Z] ===================================================================================================================
00:22:33.331 [2024-11-19T17:21:25.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97838' 00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97838 00:22:33.331 17:21:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97838 00:22:33.589 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.848 rmmod nvme_tcp 00:22:33.848 rmmod nvme_fabrics 00:22:33.848 rmmod nvme_keyring 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97271 ']' 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97271 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97271 ']' 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97271 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.848 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97271 00:22:34.107 killing process with pid 97271 00:22:34.107 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.107 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.107 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97271' 00:22:34.107 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97271 00:22:34.107 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97271 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.366 17:21:26 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:34.366 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:34.367 00:22:34.367 real 0m46.903s 00:22:34.367 user 2m16.859s 00:22:34.367 sys 0m5.137s 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.367 17:21:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.367 ************************************ 00:22:34.367 END TEST nvmf_timeout 00:22:34.367 ************************************ 00:22:34.626 17:21:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:34.626 17:21:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:34.626 00:22:34.626 real 5m42.638s 00:22:34.626 user 14m40.708s 00:22:34.626 sys 1m7.963s 00:22:34.626 ************************************ 00:22:34.626 END TEST nvmf_host 00:22:34.626 ************************************ 00:22:34.626 17:21:26 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.626 17:21:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.626 17:21:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:34.626 17:21:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:22:34.626 17:21:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:22:34.626 17:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:34.626 17:21:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.626 17:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.626 ************************************ 00:22:34.626 START TEST nvmf_target_core_interrupt_mode 00:22:34.626 ************************************ 00:22:34.626 17:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:22:34.626 * Looking for test storage... 00:22:34.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:34.626 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.626 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.626 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:22:34.886 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.887 --rc genhtml_branch_coverage=1 00:22:34.887 --rc genhtml_function_coverage=1 00:22:34.887 --rc genhtml_legend=1 00:22:34.887 --rc geninfo_all_blocks=1 00:22:34.887 --rc geninfo_unexecuted_blocks=1 00:22:34.887 00:22:34.887 ' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.887 --rc genhtml_branch_coverage=1 00:22:34.887 --rc genhtml_function_coverage=1 00:22:34.887 --rc genhtml_legend=1 00:22:34.887 --rc geninfo_all_blocks=1 00:22:34.887 --rc geninfo_unexecuted_blocks=1 00:22:34.887 00:22:34.887 ' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.887 --rc genhtml_branch_coverage=1 00:22:34.887 --rc genhtml_function_coverage=1 00:22:34.887 --rc genhtml_legend=1 00:22:34.887 --rc geninfo_all_blocks=1 00:22:34.887 --rc geninfo_unexecuted_blocks=1 00:22:34.887 00:22:34.887 ' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.887 --rc genhtml_branch_coverage=1 00:22:34.887 --rc genhtml_function_coverage=1 00:22:34.887 --rc genhtml_legend=1 00:22:34.887 --rc geninfo_all_blocks=1 00:22:34.887 --rc geninfo_unexecuted_blocks=1 00:22:34.887 00:22:34.887 ' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:34.887 ************************************ 00:22:34.887 START TEST nvmf_abort 00:22:34.887 ************************************ 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:22:34.887 * Looking for test storage... 00:22:34.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.887 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.888 --rc genhtml_branch_coverage=1 00:22:34.888 --rc genhtml_function_coverage=1 00:22:34.888 --rc genhtml_legend=1 00:22:34.888 --rc geninfo_all_blocks=1 00:22:34.888 --rc geninfo_unexecuted_blocks=1 00:22:34.888 00:22:34.888 ' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.888 --rc genhtml_branch_coverage=1 00:22:34.888 --rc genhtml_function_coverage=1 00:22:34.888 --rc genhtml_legend=1 00:22:34.888 --rc geninfo_all_blocks=1 00:22:34.888 --rc geninfo_unexecuted_blocks=1 00:22:34.888 00:22:34.888 ' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.888 --rc genhtml_branch_coverage=1 00:22:34.888 --rc genhtml_function_coverage=1 00:22:34.888 --rc genhtml_legend=1 00:22:34.888 --rc geninfo_all_blocks=1 00:22:34.888 --rc geninfo_unexecuted_blocks=1 00:22:34.888 00:22:34.888 ' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.888 --rc genhtml_branch_coverage=1 00:22:34.888 --rc genhtml_function_coverage=1 00:22:34.888 --rc genhtml_legend=1 00:22:34.888 --rc geninfo_all_blocks=1 00:22:34.888 --rc geninfo_unexecuted_blocks=1 00:22:34.888 00:22:34.888 ' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.888 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.146 17:21:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:35.146 Cannot find device "nvmf_init_br" 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:35.146 Cannot find device "nvmf_init_br2" 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:22:35.146 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:35.146 Cannot find device "nvmf_tgt_br" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.147 Cannot find device "nvmf_tgt_br2" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:35.147 Cannot find device "nvmf_init_br" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:35.147 Cannot find device "nvmf_init_br2" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:35.147 Cannot find device "nvmf_tgt_br" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:35.147 Cannot find device "nvmf_tgt_br2" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:35.147 Cannot find device "nvmf_br" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:35.147 Cannot find device "nvmf_init_if" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:35.147 Cannot find device "nvmf_init_if2" 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:35.147 
17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:35.147 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:35.404 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:35.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:35.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:22:35.405 00:22:35.405 --- 10.0.0.3 ping statistics --- 00:22:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.405 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:35.405 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:35.405 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:22:35.405 00:22:35.405 --- 10.0.0.4 ping statistics --- 00:22:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.405 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:35.405 00:22:35.405 --- 10.0.0.1 ping statistics --- 00:22:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.405 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:35.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:22:35.405 00:22:35.405 --- 10.0.0.2 ping statistics --- 00:22:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.405 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=98339 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 98339 00:22:35.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 98339 ']' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.405 17:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.405 [2024-11-19 17:21:27.883728] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:35.405 [2024-11-19 17:21:27.885032] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:22:35.405 [2024-11-19 17:21:27.885103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.663 [2024-11-19 17:21:28.031882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.663 [2024-11-19 17:21:28.084360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.663 [2024-11-19 17:21:28.084712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.663 [2024-11-19 17:21:28.085087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.663 [2024-11-19 17:21:28.085339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.663 [2024-11-19 17:21:28.085447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.663 [2024-11-19 17:21:28.086833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.663 [2024-11-19 17:21:28.086987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.663 [2024-11-19 17:21:28.086993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.922 [2024-11-19 17:21:28.206174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:35.922 [2024-11-19 17:21:28.206885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:35.922 [2024-11-19 17:21:28.206208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:35.922 [2024-11-19 17:21:28.206526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
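At this point the abort test has a live target: nvmf_tgt was launched inside the nvmf_tgt_ns_spdk namespace with --interrupt-mode and core mask 0xE, which is why the notices above report reactors on cores 1-3 and each poll group being switched to intr mode. The rpc_cmd calls that follow stand up the subsystem under test; condensed into plain shell they amount to the sequence below (the rpc.py path is this repo's, while use of the default RPC socket is an assumption).

    # Condensed from the rpc_cmd invocations later in this log: TCP transport,
    # a 64 MB malloc bdev wrapped in a delay bdev, and a subsystem listening
    # on the in-namespace address 10.0.0.3:4420.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

The delay bdev adds a large artificial latency (the -r/-t/-w/-n values are in microseconds), which keeps enough I/O in flight for the abort example to have something to cancel.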
00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 [2024-11-19 17:21:28.300884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 Malloc0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 Delay0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 17:21:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.922 [2024-11-19 17:21:28.389050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.922 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.923 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.923 17:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:22:36.181 [2024-11-19 17:21:28.580761] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:38.081 Initializing NVMe Controllers 00:22:38.081 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:22:38.081 controller IO queue size 128 less than required 00:22:38.081 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:22:38.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:38.081 Initialization complete. Launching workers. 
00:22:38.081 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31978 00:22:38.081 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32035, failed to submit 66 00:22:38.081 success 31978, unsuccessful 57, failed 0 00:22:38.081 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:38.081 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.081 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:38.339 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.339 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:38.339 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.340 rmmod nvme_tcp 00:22:38.340 rmmod nvme_fabrics 00:22:38.340 rmmod nvme_keyring 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 98339 ']' 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 98339 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 98339 ']' 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 98339 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98339 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:38.340 killing process with pid 98339 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98339' 00:22:38.340 
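The abort test configuration is traced piecemeal above; collected into plain commands (each taken verbatim from the rpc_cmd traces, with scripts/rpc.py assumed as the client behind the test framework's rpc_cmd wrapper), the sequence is:

    # Condensed from the rpc_cmd traces above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0          # 64 MB backing bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # Drive aborts against the delayed namespace, as the trace does:
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The delay bdev is presumably what makes the abort workload meaningful here: with large microsecond-scale latencies injected on every I/O path, queued commands stay outstanding long enough for the abort example to race aborts against live I/O, consistent with the "abort submitted 32035" / "success 31978, unsuccessful 57" summary above.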
17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 98339 00:22:38.340 17:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 98339 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:38.598 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:38.857 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:38.857 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:38.857 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:38.857 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:38.857 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.858 17:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:22:38.858 00:22:38.858 real 0m4.100s 00:22:38.858 user 0m9.193s 00:22:38.858 sys 0m1.372s 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:38.858 ************************************ 00:22:38.858 END TEST nvmf_abort 00:22:38.858 ************************************ 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:38.858 ************************************ 00:22:38.858 START TEST nvmf_ns_hotplug_stress 00:22:38.858 ************************************ 00:22:38.858 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:22:39.117 * Looking for test storage... 00:22:39.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.117 17:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:39.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.117 --rc genhtml_branch_coverage=1 00:22:39.117 --rc genhtml_function_coverage=1 00:22:39.117 --rc genhtml_legend=1 00:22:39.117 --rc geninfo_all_blocks=1 00:22:39.117 --rc geninfo_unexecuted_blocks=1 00:22:39.117 00:22:39.117 ' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:39.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.117 --rc genhtml_branch_coverage=1 00:22:39.117 --rc genhtml_function_coverage=1 00:22:39.117 --rc genhtml_legend=1 00:22:39.117 --rc geninfo_all_blocks=1 00:22:39.117 --rc geninfo_unexecuted_blocks=1 00:22:39.117 00:22:39.117 
' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:39.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.117 --rc genhtml_branch_coverage=1 00:22:39.117 --rc genhtml_function_coverage=1 00:22:39.117 --rc genhtml_legend=1 00:22:39.117 --rc geninfo_all_blocks=1 00:22:39.117 --rc geninfo_unexecuted_blocks=1 00:22:39.117 00:22:39.117 ' 00:22:39.117 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:39.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.117 --rc genhtml_branch_coverage=1 00:22:39.117 --rc genhtml_function_coverage=1 00:22:39.117 --rc genhtml_legend=1 00:22:39.118 --rc geninfo_all_blocks=1 00:22:39.118 --rc geninfo_unexecuted_blocks=1 00:22:39.118 00:22:39.118 ' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.118 17:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.118 17:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:39.118 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:39.119 Cannot find device "nvmf_init_br" 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:22:39.119 Cannot find device "nvmf_init_br2" 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:39.119 Cannot find device "nvmf_tgt_br" 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:39.119 Cannot find device "nvmf_tgt_br2" 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:39.119 Cannot find device "nvmf_init_br" 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:22:39.119 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:39.377 Cannot find device "nvmf_init_br2" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:39.377 Cannot find device "nvmf_tgt_br" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:39.377 Cannot find device "nvmf_tgt_br2" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:39.377 Cannot find device "nvmf_br" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:39.377 Cannot find device "nvmf_init_if" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:39.377 Cannot find device "nvmf_init_if2" 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:39.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:39.377 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:39.377 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:39.636 17:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:39.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:39.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:22:39.636 00:22:39.636 --- 10.0.0.3 ping statistics --- 00:22:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.636 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:39.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:39.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:22:39.636 00:22:39.636 --- 10.0.0.4 ping statistics --- 00:22:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.636 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:39.636 17:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:39.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:39.636 00:22:39.636 --- 10.0.0.1 ping statistics --- 00:22:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.636 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:39.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:22:39.636 00:22:39.636 --- 10.0.0.2 ping statistics --- 00:22:39.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.636 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=98612 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 98612 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 98612 ']' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.636 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:39.636 [2024-11-19 17:21:32.079639] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:39.636 [2024-11-19 17:21:32.080587] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:22:39.636 [2024-11-19 17:21:32.080643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.894 [2024-11-19 17:21:32.221168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:39.894 [2024-11-19 17:21:32.267404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.894 [2024-11-19 17:21:32.267478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.894 [2024-11-19 17:21:32.267489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.894 [2024-11-19 17:21:32.267497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.894 [2024-11-19 17:21:32.267504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.894 [2024-11-19 17:21:32.268873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.894 [2024-11-19 17:21:32.269030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.894 [2024-11-19 17:21:32.269030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.894 [2024-11-19 17:21:32.384943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:39.894 [2024-11-19 17:21:32.385116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:39.894 [2024-11-19 17:21:32.385548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:39.894 [2024-11-19 17:21:32.385681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
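Once this second target is up, the ns_hotplug_stress body that follows runs spdk_nvme_perf for 30 s in the background and hot-plugs the namespace underneath it. A condensed sketch reconstructed from the repeating kill -0 / remove_ns / add_ns / resize pattern in the trace below (the loop framing is an assumption; only the individual commands appear verbatim, and PERF_PID is set by the harness, visible as 98730 below):

    # Sketch of the hotplug stress loop traced below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do           # stop once the 30 s perf run exits
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"        # resize NULL1 to 1001, 1002, ...
    done

The "Read completed with error (sct=0, sc=11)" messages interleaved in the trace below are presumably the background perf I/O observing the namespace being detached and re-attached, which is exactly the condition this stress test exercises.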
00:22:39.894 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.894 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:22:39.894 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.894 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.895 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:40.153 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.153 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:22:40.153 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.411 [2024-11-19 17:21:32.746042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.411 17:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:40.669 17:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:40.927 [2024-11-19 17:21:33.338489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:40.928 17:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:41.186 17:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:22:41.443 Malloc0 00:22:41.443 17:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:41.701 Delay0 00:22:41.701 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:41.959 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:22:42.217 NULL1 00:22:42.217 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:42.475 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:22:42.475 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98730 00:22:42.475 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:42.475 17:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:43.848 Read completed with error (sct=0, sc=11) 00:22:43.848 17:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:43.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:43.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:43.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:43.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:43.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:43.848 17:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:22:43.848 17:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:22:44.106 true 00:22:44.106 17:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:44.106 17:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:45.042 17:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:45.042 17:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:22:45.042 17:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:22:45.300 true 00:22:45.300 17:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:45.300 17:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:45.559 17:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:45.817 17:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:22:45.817 17:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:22:46.076 true 00:22:46.076 
17:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:46.076 17:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:47.013 17:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:47.270 17:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:22:47.270 17:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:22:47.270 true 00:22:47.270 17:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:47.271 17:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:47.529 17:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:47.787 17:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:22:47.787 17:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:22:48.046 true 00:22:48.046 17:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:48.046 17:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:48.980 17:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:49.239 17:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:22:49.239 17:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:22:49.498 true 00:22:49.498 17:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:49.498 17:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:49.756 17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:50.015 17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:22:50.015 
17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:22:50.273 true 00:22:50.273 17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:50.273 17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:50.273 17:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:50.532 17:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:22:50.532 17:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:22:50.790 true 00:22:50.790 17:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:50.790 17:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 17:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:52.168 17:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:22:52.168 17:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:22:52.427 true 00:22:52.427 17:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:52.427 17:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:53.362 17:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:53.620 17:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:22:53.620 17:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:22:53.890 true 00:22:53.890 17:21:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:53.890 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:53.890 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:54.469 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:22:54.469 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:22:54.469 true 00:22:54.469 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:54.469 17:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:54.728 17:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:54.987 17:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:22:54.987 17:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:22:55.246 true 00:22:55.246 17:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:55.246 17:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:56.180 17:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:56.749 17:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:22:56.749 17:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:22:56.749 true 00:22:56.749 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:56.749 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:57.008 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:57.267 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:22:57.267 17:21:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:22:57.526 true 00:22:57.526 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:57.526 17:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:57.785 17:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:58.044 17:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:22:58.044 17:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:22:58.302 true 00:22:58.302 17:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:58.302 17:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:59.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.236 17:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:59.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:59.495 17:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:22:59.495 17:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:22:59.753 true 00:22:59.753 17:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:22:59.753 17:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:00.689 17:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:00.689 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:23:00.689 17:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:23:00.948 true 00:23:00.948 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:00.948 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:01.207 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:01.465 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:23:01.465 17:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:23:01.723 true 00:23:01.723 17:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:01.723 17:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:02.660 17:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:02.919 17:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:23:02.919 17:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:23:03.177 true 00:23:03.177 17:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:03.177 17:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:03.436 17:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:03.694 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:23:03.694 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:23:03.953 true 00:23:03.953 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:03.953 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:04.211 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:04.469 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:23:04.469 17:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:23:04.728 true 00:23:04.728 17:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:04.728 17:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:05.663 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:05.922 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:23:05.922 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:23:06.181 true 00:23:06.181 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:06.181 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:06.440 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:06.440 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:23:06.440 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:23:06.699 true 00:23:06.699 17:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:06.699 17:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:07.633 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:07.891 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:23:07.891 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:23:08.149 true 00:23:08.149 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:08.149 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:08.408 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:08.666 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:23:08.666 17:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:23:08.925 true 00:23:08.925 17:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:08.925 17:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:09.860 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:09.860 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:23:09.860 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:23:10.119 true 00:23:10.119 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:10.119 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:10.378 17:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:10.636 17:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:23:10.636 17:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:23:10.894 true 00:23:10.894 17:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730 00:23:10.894 17:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:11.827 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:11.827 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:23:11.827 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:23:12.085 true 00:23:12.085 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 98730
00:23:12.085 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:12.343 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:23:12.601 Initializing NVMe Controllers
00:23:12.601 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:23:12.601 Controller IO queue size 128, less than required.
00:23:12.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.601 Controller IO queue size 128, less than required.
00:23:12.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.601 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:12.601 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:12.601 Initialization complete. Launching workers.
00:23:12.601 ========================================================
00:23:12.601                                                                            Latency(us)
00:23:12.601 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:23:12.601 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     726.02       0.35   91591.08    2100.60 1038613.68
00:23:12.602 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11886.02       5.80   10768.47    2797.41  477065.90
00:23:12.602 ========================================================
00:23:12.602 Total                                                                  :   12612.04       6.16   15421.10    2100.60 1038613.68
00:23:12.602
00:23:12.602 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:23:12.602 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:23:12.860 true
00:23:12.860 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98730
00:23:12.860 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98730) - No such process
00:23:12.860 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98730
00:23:12.860 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:13.118 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:23:13.377 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:23:13.377 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:23:13.377 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:23:13.377 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:23:13.377 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:23:13.377 null0 00:23:13.635 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:13.635 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:13.635 17:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:23:13.635 null1 00:23:13.635 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:13.635 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:13.635 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:23:13.893 null2 00:23:13.893 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:13.893 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:13.893 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:23:14.152 null3 00:23:14.152 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:14.152 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:14.152 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:23:14.410 null4 00:23:14.410 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:14.410 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:14.410 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:23:14.668 null5 00:23:14.668 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:14.668 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:14.668 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:23:14.668 null6 00:23:14.668 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:14.668 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:14.668 17:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:23:14.928 null7 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
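From @58 onward the single hotplug loop is replaced by eight concurrent add/remove workers, one per namespace ID: eight 100 MiB null bdevs with 4096-byte blocks are created (@59/@60), then add_remove is launched in the background for each (@63) with its PID recorded (@64), and the @14-@18 markers interleaving below are those workers racing each other. A sketch of what the trace implies (exact script text is an assumption; loop bounds come from the @16 markers):

    # Sketch only -- reconstructed from the @14-@18 and @58-@66 trace markers;
    # rpc_py is an assumption, as in the earlier sketch.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2                        # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do               # @16: ten add/remove rounds
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    nthreads=8                                       # @58
    pids=()                                          # @58
    for ((i = 0; i < nthreads; i++)); do             # @59
        $rpc_py bdev_null_create "null$i" 100 4096   # @60: 100 MiB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # @62
        add_remove $((i + 1)) "null$i" &             # @63: nsid i+1 backed by null$i
        pids+=($!)                                   # @64: remember the worker PID
    done
    wait "${pids[@]}"                                # @66: wait for all eight workers

Because the eight workers share one subsystem, the remaining trace is their interleaved output: repeated @16/@17/@18 entries with namespace IDs 1 through 8 in no fixed order.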
00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
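A note on the perf summary earlier in this phase: the Total row is the IOPS-weighted mean of the two namespace rows, (726.02 x 91591.08 + 11886.02 x 10768.47) / 12612.04, which comes to roughly 15421 us, and the MiB/s column follows from the 512-byte IO size (726.02 x 512 / 2^20 is about 0.35 MiB/s). NSID 1, the namespace the @45/@46 loop was removing and re-adding for the whole run (Delay0 presumably reclaims NSID 1 each time, since no explicit -n is passed), shows roughly 16x fewer IOPS and about 8.5x higher average latency than the undisturbed NSID 2, which is the expected cost of the hotplug churn. The eight background workers launched above each run their ten add/remove rounds, and the phase ends at the @66 wait on their PIDs (99730 99732 99733 99735 99737 99739 99741 99743), visible just below.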
00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99730 99732 99733 99735 99737 99739 99741 99743 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:14.928 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:15.186 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 
17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.445 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:15.703 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.703 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.704 17:22:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:15.704 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:15.962 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:16.221 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:16.479 17:22:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.479 17:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.738 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:16.738 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:16.996 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.255 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
2 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.513 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:17.513 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:17.513 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.772 17:22:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:17.772 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:18.030 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.031 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.031 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:18.293 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:18.580 17:22:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.580 17:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:18.580 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:18.580 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.580 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.580 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:18.851 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:19.110 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.111 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.370 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:19.630 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:19.630 17:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.630 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:19.889 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:20.149 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.409 
17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.409 17:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.668 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:23:20.669 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.669 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:23:20.669 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.669 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.669 rmmod nvme_tcp 00:23:20.927 rmmod nvme_fabrics 00:23:20.927 rmmod nvme_keyring 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 98612 ']' 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 98612 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 98612 ']' 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 98612 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.927 17:22:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98612 00:23:20.927 killing process with pid 98612 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98612' 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 98612 00:23:20.927 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 98612 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 
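The loop traced through most of the entries above is lines 16-18 of target/ns_hotplug_stress.sh: ten iterations that hot-add and hot-remove namespaces 1-8 (backed by bdevs null0-null7) on nqn.2016-06.io.spdk:cnode1 while the stress I/O runs. Below is a minimal sketch of that loop reconstructed from the xtrace; the namespace-id selection and the strictly sequential ordering are assumptions, since the interleaved ordering in the log suggests the add and remove calls are actually issued concurrently.

    # Sketch reconstructed from the xtrace above; only the two rpc.py
    # invocations and the (( i < 10 )) / (( ++i )) loop counter are
    # confirmed by the log, the rest is assumed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        n=$((RANDOM % 8 + 1))   # assumed: nsid 1-8, paired with bdev null$((n-1))
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done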
00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:21.186 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:23:21.445
00:23:21.445 real 0m42.405s
00:23:21.445 user 3m9.404s
00:23:21.445 sys 0m16.188s
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:23:21.445 ************************************
00:23:21.445 END TEST nvmf_ns_hotplug_stress
00:23:21.445 ************************************
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:23:21.445 ************************************
00:23:21.445 START TEST nvmf_delete_subsystem
00:23:21.445 ************************************
00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:23:21.445 * Looking for test storage...
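The entries above close out nvmf_ns_hotplug_stress: nvmftestfini unloads the nvme-tcp, nvme-fabrics, and nvme-keyring modules, kills the target process (pid 98612), restores iptables minus the SPDK_NVMF rules, and dismantles the virtual test network. Condensed from the commands traced at nvmf/common.sh@233-246 (nvmf_veth_fini), the network teardown is:

    # Teardown as traced above: detach both ends of each veth pair from the
    # bridge, bring them down, then delete the bridge, the host-side
    # interfaces, and the target-side interfaces inside the network namespace.
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br2 nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_tgt_br2 nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_init_br2 down
    ip link set nvmf_tgt_br down
    ip link set nvmf_tgt_br2 down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    remove_spdk_ns   # finally drops the nvmf_tgt_ns_spdk namespace itself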
00:23:21.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:23:21.445 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:21.705 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:21.705 17:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.705 --rc genhtml_branch_coverage=1 00:23:21.705 --rc genhtml_function_coverage=1 00:23:21.705 --rc genhtml_legend=1 00:23:21.705 --rc geninfo_all_blocks=1 00:23:21.705 --rc geninfo_unexecuted_blocks=1 00:23:21.705 00:23:21.705 ' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.705 --rc genhtml_branch_coverage=1 00:23:21.705 --rc genhtml_function_coverage=1 00:23:21.705 --rc genhtml_legend=1 00:23:21.705 --rc geninfo_all_blocks=1 00:23:21.705 --rc geninfo_unexecuted_blocks=1 00:23:21.705 00:23:21.705 ' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.705 --rc genhtml_branch_coverage=1 00:23:21.705 --rc genhtml_function_coverage=1 00:23:21.705 --rc genhtml_legend=1 00:23:21.705 --rc geninfo_all_blocks=1 00:23:21.705 --rc geninfo_unexecuted_blocks=1 00:23:21.705 00:23:21.705 ' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.705 --rc genhtml_branch_coverage=1 00:23:21.705 --rc genhtml_function_coverage=1 00:23:21.705 --rc 
genhtml_legend=1 00:23:21.705 --rc geninfo_all_blocks=1 00:23:21.705 --rc geninfo_unexecuted_blocks=1 00:23:21.705 00:23:21.705 ' 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.705 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.706 17:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.706 17:22:14 
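The NVME_HOST variables and the address plan above (initiators 10.0.0.1/10.0.0.2 on the host, targets 10.0.0.3/10.0.0.4 inside the namespace) are the same values a manual nvme-cli connect would consume; as an illustrative sketch only — this test never runs nvme connect itself, it drives I/O through spdk_nvme_perf — such a connect would look roughly like:

# hypothetical manual connect using the identity generated above (sketch, not part of this run)
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 \
  --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07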
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.706 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:21.707 Cannot find device "nvmf_init_br" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:21.707 Cannot find device "nvmf_init_br2" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:21.707 Cannot find device "nvmf_tgt_br" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.707 Cannot find device "nvmf_tgt_br2" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:21.707 Cannot find device "nvmf_init_br" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:21.707 Cannot find device "nvmf_init_br2" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:21.707 Cannot find device "nvmf_tgt_br" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:21.707 Cannot find device "nvmf_tgt_br2" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:21.707 Cannot find device "nvmf_br" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:21.707 Cannot find device "nvmf_init_if" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:21.707 Cannot find device "nvmf_init_if2" 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:21.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:21.707 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:21.965 17:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:21.965 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:21.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:23:21.966 00:23:21.966 --- 10.0.0.3 ping statistics --- 00:23:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.966 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:21.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:21.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:23:21.966 00:23:21.966 --- 10.0.0.4 ping statistics --- 00:23:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.966 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:21.966 00:23:21.966 --- 10.0.0.1 ping statistics --- 00:23:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.966 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:21.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:21.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:23:21.966 00:23:21.966 --- 10.0.0.2 ping statistics --- 00:23:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.966 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=101111 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 101111 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 101111 ']' 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
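The nvmf_veth_init sequence traced above builds a self-contained two-initiator/two-target topology on a single bridge; condensed into a standalone sketch (same device names and addresses as the trace, run as root, idempotency checks and error handling dropped):

# condensed replay of nvmf_veth_init (sketch)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target-side ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                    # one L2 segment for all four ends
done
# rules are tagged with an SPDK_NVMF comment so teardown can filter them back out
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                       # host -> in-namespace target sanity check

Bridging all four host-side veth ends onto nvmf_br keeps initiators and targets on one L2 segment, which is why the ping checks above succeed in both directions with plain /24 addressing and no routes.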
00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.966 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.225 [2024-11-19 17:22:14.558992] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:22.225 [2024-11-19 17:22:14.560335] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:22.225 [2024-11-19 17:22:14.560404] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.225 [2024-11-19 17:22:14.714608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:22.484 [2024-11-19 17:22:14.773907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.484 [2024-11-19 17:22:14.773987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.484 [2024-11-19 17:22:14.774001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.484 [2024-11-19 17:22:14.774012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.484 [2024-11-19 17:22:14.774022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.484 [2024-11-19 17:22:14.775560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.484 [2024-11-19 17:22:14.775589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.484 [2024-11-19 17:22:14.913979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:22.484 [2024-11-19 17:22:14.914371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:22.484 [2024-11-19 17:22:14.914493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
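nvmfappstart reduces to launching nvmf_tgt inside the namespace with the flags visible in the trace and then waiting for its RPC socket; a minimal stand-in, assuming the default /var/tmp/spdk.sock socket named above (the real waitforlisten is more careful than this polling loop):

# sketch: start the target in interrupt mode on cores 0-1 and wait for RPC
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done      # crude waitforlisten substitute

-m 0x3 is what produces the two reactors on cores 0 and 1 in the NOTICE lines above, and --interrupt-mode is what the thread.c messages about spdk_threads being set to intr mode refer to.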
00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.484 17:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.484 [2024-11-19 17:22:15.004796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.743 [2024-11-19 17:22:15.033278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.743 NULL1 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.743 Delay0 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101154 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:22.743 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:23:22.743 [2024-11-19 17:22:15.253762] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
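The subsystem under test is assembled through a handful of RPCs and then hammered with spdk_nvme_perf; stripped of the rpc_cmd wrappers, the sequence the trace just performed is approximately (rpc.py talks to the default /var/tmp/spdk.sock):

# sketch of the setup RPCs, arguments copied from the trace
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512                     # 1000 MiB null bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000          # ~1 s injected latency per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# drive I/O from the host side while the subsystem gets deleted underneath it
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The delay bdev is the point of the exercise: with roughly a second of injected latency per I/O and a queue depth of 128, a deep backlog of in-flight commands is guaranteed to exist when nvmf_delete_subsystem fires, and those commands surface below as "completed with error (sct=0, sc=8)" — generic status 0x08, which corresponds to command aborted due to SQ deletion.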
00:23:24.641 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.641 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.641 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 starting I/O failed: -6 00:23:24.899 Read completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.899 Write completed with error (sct=0, sc=8) 00:23:24.900 [2024-11-19 17:22:17.300768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3e7e0 is same with the state(6) to be set 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write 
completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 starting I/O failed: -6 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 [2024-11-19 17:22:17.301320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88fc000c40 is same with the state(6) to be set 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 
Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, 
sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Write completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.900 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Write completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Write completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Write completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:24.901 Write completed with error (sct=0, sc=8) 00:23:24.901 Write completed with error (sct=0, sc=8) 00:23:24.901 Read completed with error (sct=0, sc=8) 00:23:25.834 [2024-11-19 17:22:18.270585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b39ee0 is same with the state(6) to be set 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 [2024-11-19 17:22:18.299227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3da50 
is same with the state(6) to be set 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Write completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.834 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 [2024-11-19 17:22:18.299691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b40ea0 is same with the state(6) to be set 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 [2024-11-19 17:22:18.300914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88fc00d680 is same with the state(6) to be set 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed 
with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Write completed with error (sct=0, sc=8) 00:23:25.835 Read completed with error (sct=0, sc=8) 00:23:25.835 [2024-11-19 17:22:18.303691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88fc00d020 is same with the state(6) to be set 00:23:25.835 Initializing NVMe Controllers 00:23:25.835 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:25.835 Controller IO queue size 128, less than required. 00:23:25.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:25.835 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:23:25.835 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:23:25.835 Initialization complete. Launching workers. 00:23:25.835 ======================================================== 00:23:25.835 Latency(us) 00:23:25.835 Device Information : IOPS MiB/s Average min max 00:23:25.835 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.43 0.08 895971.28 1861.78 1021753.26 00:23:25.835 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.94 0.08 895904.30 387.77 1023499.46 00:23:25.835 ======================================================== 00:23:25.835 Total : 340.37 0.17 895937.84 387.77 1023499.46 00:23:25.835 00:23:25.835 [2024-11-19 17:22:18.304814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b39ee0 (9): Bad file descriptor 00:23:25.835 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:25.835 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.835 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:23:25.835 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101154 00:23:25.835 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:23:26.401 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101154 00:23:26.402 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101154) - No such process 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101154 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:23:26.402 
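The pass/fail signal here is purely process lifetime: the delete is considered to have unstuck the queued I/O once the perf process exits. The loop delete_subsystem.sh runs (per the trace: a delay counter, kill -0 probes, 0.5 s sleeps) is essentially:

# sketch of the wait-for-perf loop; perf_pid is the variable name used by the script
delay=0
# the real script does not silence kill's stderr, which is where the
# "kill: (101154) - No such process" line in the log comes from
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && exit 1                         # give up after ~15 s of waiting
    sleep 0.5
done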
17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 101154 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 101154 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:26.402 [2024-11-19 17:22:18.833080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101200 00:23:26.402 17:22:18 
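NOT, invoked above as "NOT wait 101154", is the harness's inversion helper: it succeeds exactly when the wrapped command fails. A simplified sketch of the pattern visible in the autotest_common.sh trace (the real helper additionally special-cases exit codes above 128, i.e. signal deaths):

# simplified NOT helper (sketch)
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))        # invert: succeed only if the wrapped command failed
}
NOT wait "$perf_pid"     # passes here because perf has already exited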
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:26.402 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:26.659 [2024-11-19 17:22:19.004483] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:26.917 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:26.917 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:26.917 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:27.480 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:27.480 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:27.480 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:28.044 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:28.044 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:28.044 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:28.609 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:28.609 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:28.609 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:28.866 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:28.866 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:28.866 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:29.430 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:29.430 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:29.430 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:29.687 Initializing NVMe Controllers 00:23:29.687 Attached to NVMe over Fabrics controller at 
10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.687 Controller IO queue size 128, less than required. 00:23:29.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:29.687 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:23:29.687 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:23:29.687 Initialization complete. Launching workers. 00:23:29.687 ======================================================== 00:23:29.687 Latency(us) 00:23:29.687 Device Information : IOPS MiB/s Average min max 00:23:29.687 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005134.73 1000209.96 1015548.90 00:23:29.687 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1008118.64 1000176.39 1017072.12 00:23:29.687 ======================================================== 00:23:29.687 Total : 256.00 0.12 1006626.69 1000176.39 1017072.12 00:23:29.687 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101200 00:23:29.945 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101200) - No such process 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101200 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.945 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.945 rmmod nvme_tcp 00:23:29.945 rmmod nvme_fabrics 00:23:29.945 rmmod nvme_keyring 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 101111 ']' 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 101111 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 101111 ']' 00:23:30.203 17:22:22 
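From here nvmftestfini unwinds everything in reverse: stop the target, unload the kernel modules, strip only the iptables rules tagged at setup time, and dismantle the veth/bridge/namespace plumbing. Condensed from the teardown around and below this point (sketch, run as root):

# condensed teardown, names as in the trace
kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess
modprobe -v -r nvme-tcp nvme-fabrics                     # triggers the rmmod lines above
iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: drop only our tagged rules
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk                         # takes the in-namespace veth ends with it

Tagging every rule with an SPDK_NVMF comment at insertion time is what makes the grep -v restore safe: any rules the host had before the test pass through iptables-save untouched.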
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 101111 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101111 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.203 killing process with pid 101111 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101111' 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 101111 00:23:30.203 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 101111 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:30.462 17:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.462 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:23:30.720 00:23:30.720 real 0m9.182s 00:23:30.720 user 0m24.962s 00:23:30.720 sys 0m1.761s 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 ************************************ 00:23:30.720 END TEST nvmf_delete_subsystem 00:23:30.720 ************************************ 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 ************************************ 00:23:30.720 START TEST nvmf_host_management 00:23:30.720 ************************************ 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:23:30.720 * Looking for test storage... 
00:23:30.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.720 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.980 --rc genhtml_branch_coverage=1 00:23:30.980 --rc genhtml_function_coverage=1 00:23:30.980 --rc genhtml_legend=1 00:23:30.980 --rc geninfo_all_blocks=1 00:23:30.980 --rc geninfo_unexecuted_blocks=1 00:23:30.980 00:23:30.980 ' 00:23:30.980 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.980 --rc genhtml_branch_coverage=1 00:23:30.980 --rc genhtml_function_coverage=1 00:23:30.980 --rc genhtml_legend=1 00:23:30.980 --rc geninfo_all_blocks=1 00:23:30.980 --rc geninfo_unexecuted_blocks=1 00:23:30.980 00:23:30.980 ' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.981 --rc genhtml_branch_coverage=1 00:23:30.981 --rc genhtml_function_coverage=1 00:23:30.981 --rc genhtml_legend=1 00:23:30.981 --rc geninfo_all_blocks=1 00:23:30.981 --rc geninfo_unexecuted_blocks=1 00:23:30.981 00:23:30.981 ' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.981 --rc genhtml_branch_coverage=1 00:23:30.981 --rc genhtml_function_coverage=1 00:23:30.981 --rc genhtml_legend=1 
00:23:30.981 --rc geninfo_all_blocks=1 00:23:30.981 --rc geninfo_unexecuted_blocks=1 00:23:30.981 00:23:30.981 ' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.981 17:22:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:30.981 17:22:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.981 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:30.982 Cannot find device "nvmf_init_br" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:30.982 Cannot find device "nvmf_init_br2" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:30.982 Cannot find device "nvmf_tgt_br" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.982 Cannot find device "nvmf_tgt_br2" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:30.982 Cannot find device "nvmf_init_br" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:23:30.982 Cannot find device "nvmf_init_br2" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:30.982 Cannot find device "nvmf_tgt_br" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:30.982 Cannot find device "nvmf_tgt_br2" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:30.982 Cannot find device "nvmf_br" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:30.982 Cannot find device "nvmf_init_if" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:30.982 Cannot find device "nvmf_init_if2" 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.982 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:31.239 17:22:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:31.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:31.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:23:31.239 00:23:31.239 --- 10.0.0.3 ping statistics --- 00:23:31.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.239 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:31.239 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:31.239 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:23:31.239 00:23:31.239 --- 10.0.0.4 ping statistics --- 00:23:31.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.239 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:31.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:31.239 00:23:31.239 --- 10.0.0.1 ping statistics --- 00:23:31.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.239 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:31.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:31.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:31.239 00:23:31.239 --- 10.0.0.2 ping statistics --- 00:23:31.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.239 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=101480 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 101480 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101480 ']' 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.239 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:31.497 [2024-11-19 17:22:23.834734] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:31.497 [2024-11-19 17:22:23.836073] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:31.497 [2024-11-19 17:22:23.836159] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.497 [2024-11-19 17:22:23.992503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.756 [2024-11-19 17:22:24.050626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.756 [2024-11-19 17:22:24.050695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.756 [2024-11-19 17:22:24.050710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.756 [2024-11-19 17:22:24.050720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.756 [2024-11-19 17:22:24.050730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.756 [2024-11-19 17:22:24.052109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.756 [2024-11-19 17:22:24.052251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.756 [2024-11-19 17:22:24.052325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.756 [2024-11-19 17:22:24.052324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.756 [2024-11-19 17:22:24.157401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:31.756 [2024-11-19 17:22:24.157691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:31.756 [2024-11-19 17:22:24.158056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:31.756 [2024-11-19 17:22:24.158416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:31.756 [2024-11-19 17:22:24.158757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
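The notices above are the interrupt-mode variant of nvmfappstart at work: nvmf_tgt is launched inside the namespace with --interrupt-mode and core mask 0x1E (cores 1-4), and the harness blocks until the RPC socket answers. A minimal stand-in for that wait, assuming scripts/rpc.py from the same repo (the real waitforlisten in autotest_common.sh is more defensive about timeouts and stale sockets):

rootdir=/home/vagrant/spdk_repo/spdk

# Launch the target in the namespace: RPC instance 0, all tracepoint
# groups (-e 0xFFFF), interrupt mode, reactors on cores 1-4 (-m 0x1E).
ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

# Poll until /var/tmp/spdk.sock accepts RPCs; bail out if the app died.
while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

With --interrupt-mode the reactors wait on file descriptors instead of busy-polling, which is why each nvmf_tgt poll group thread is explicitly flipped to intr mode in the notices.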
00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:31.756 [2024-11-19 17:22:24.253384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:31.756 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.015 Malloc0 00:23:32.015 [2024-11-19 17:22:24.349495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=101539 00:23:32.015 17:22:24
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 101539 /var/tmp/bdevperf.sock 00:23:32.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101539 ']' 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:32.015 { 00:23:32.015 "params": { 00:23:32.015 "name": "Nvme$subsystem", 00:23:32.015 "trtype": "$TEST_TRANSPORT", 00:23:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.015 "adrfam": "ipv4", 00:23:32.015 "trsvcid": "$NVMF_PORT", 00:23:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.015 "hdgst": ${hdgst:-false}, 00:23:32.015 "ddgst": ${ddgst:-false} 00:23:32.015 }, 00:23:32.015 "method": "bdev_nvme_attach_controller" 00:23:32.015 } 00:23:32.015 EOF 00:23:32.015 )") 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
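The gen_nvmf_target_json helper traced above is a here-document template: shell expansion fills in the per-controller transport parameters, with ${hdgst:-false} and ${ddgst:-false} defaulting the digests off, and the result reaches bdevperf through process substitution, which is why the command line shows --json /dev/fd/63 rather than a file. A stripped-down, single-controller sketch of the same idea, assuming the usual SPDK "subsystems" wrapper that the real helper assembles with jq (the fully rendered parameters appear in the trace just below):

gen_one_controller_json() {
    # Hypothetical single-controller variant of gen_nvmf_target_json;
    # the unquoted EOF lets the shell expand the digest defaults.
    cat << EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          }
        }
      ]
    }
  ]
}
EOF
}

# <(...) shows up in the child as /dev/fd/63, so no temp file is written.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_one_controller_json) -q 64 -o 65536 -w verify -t 10

The jq . step in the trace validates and normalizes the rendered fragment before printf joins the accumulated config entries into the final document.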
00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:23:32.015 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:32.015 "params": { 00:23:32.015 "name": "Nvme0", 00:23:32.015 "trtype": "tcp", 00:23:32.015 "traddr": "10.0.0.3", 00:23:32.015 "adrfam": "ipv4", 00:23:32.015 "trsvcid": "4420", 00:23:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:32.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:32.015 "hdgst": false, 00:23:32.015 "ddgst": false 00:23:32.015 }, 00:23:32.015 "method": "bdev_nvme_attach_controller" 00:23:32.015 }' 00:23:32.015 [2024-11-19 17:22:24.467752] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:32.015 [2024-11-19 17:22:24.468024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101539 ] 00:23:32.273 [2024-11-19 17:22:24.621755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.273 [2024-11-19 17:22:24.681086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.531 Running I/O for 10 seconds... 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.531 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.531 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:23:32.531 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:23:32.531 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:32.790 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:33.050 [2024-11-19 17:22:25.337241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdaee0 is same with the state(6) to be set 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:33.050 [2024-11-19 17:22:25.344444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.050 [2024-11-19 17:22:25.344907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.345022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.050 [2024-11-19 17:22:25.345090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.345161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.050 [2024-11-19 17:22:25.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.345320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.050 [2024-11-19 17:22:25.345389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.345458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7660 is same with the state(6) to be set 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.050 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:23:33.050 [2024-11-19 17:22:25.352648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.050 [2024-11-19 17:22:25.352775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.352865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.050 [2024-11-19 17:22:25.352944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.353011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.050 [2024-11-19 17:22:25.353076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.353125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.050 [2024-11-19 17:22:25.353193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.050 [2024-11-19 17:22:25.353242 - 17:22:25.357917] nvme_qpair.c: 243/474: [condensed: identical *NOTICE* command/completion pairs for WRITE sqid:1 cid:4 through cid:62 nsid:1, lba:98816 through lba:106240 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:23:33.052 [2024-11-19 17:22:25.357933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.052 [2024-11-19 17:22:25.357944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.052 [2024-11-19 17:22:25.358083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7660 (9): Bad file descriptor 00:23:33.052 task offset: 98304 on job bdev=Nvme0n1 fails 00:23:33.052 00:23:33.052 Latency(us) 00:23:33.052 [2024-11-19T17:22:25.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.052 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:33.052 Job: Nvme0n1 ended in about 0.46 seconds with error 00:23:33.052 Verification LBA range: start 0x0 length 0x400 00:23:33.052 Nvme0n1 : 0.46 1664.01 104.00 138.67 0.00 34137.46 5749.29 42896.29 00:23:33.052 [2024-11-19T17:22:25.585Z] =================================================================================================================== 00:23:33.052 [2024-11-19T17:22:25.585Z] Total : 1664.01 104.00 138.67 0.00 34137.46 5749.29 42896.29 00:23:33.052 [2024-11-19 17:22:25.359005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:33.052 [2024-11-19 17:22:25.360523] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:33.052 [2024-11-19 17:22:25.363394] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 101539 00:23:33.987 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (101539) - No such process 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.987 { 00:23:33.987 "params": { 00:23:33.987 "name": "Nvme$subsystem", 00:23:33.987 "trtype": "$TEST_TRANSPORT", 00:23:33.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.987 "adrfam": "ipv4", 00:23:33.987 "trsvcid": "$NVMF_PORT", 00:23:33.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.987 "hdgst": ${hdgst:-false}, 00:23:33.987 "ddgst": ${ddgst:-false} 00:23:33.987 }, 00:23:33.987 "method": "bdev_nvme_attach_controller" 00:23:33.987 } 00:23:33.987 EOF 00:23:33.987 )") 00:23:33.987 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@582 -- # cat 00:23:33.988 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:23:33.988 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:23:33.988 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:33.988 "params": { 00:23:33.988 "name": "Nvme0", 00:23:33.988 "trtype": "tcp", 00:23:33.988 "traddr": "10.0.0.3", 00:23:33.988 "adrfam": "ipv4", 00:23:33.988 "trsvcid": "4420", 00:23:33.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:33.988 "hdgst": false, 00:23:33.988 "ddgst": false 00:23:33.988 }, 00:23:33.988 "method": "bdev_nvme_attach_controller" 00:23:33.988 }' 00:23:33.988 [2024-11-19 17:22:26.418683] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:33.988 [2024-11-19 17:22:26.418829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101585 ] 00:23:34.246 [2024-11-19 17:22:26.566022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.246 [2024-11-19 17:22:26.607886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.505 Running I/O for 1 seconds... 00:23:35.440 1770.00 IOPS, 110.62 MiB/s 00:23:35.440 Latency(us) 00:23:35.440 [2024-11-19T17:22:27.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.440 Verification LBA range: start 0x0 length 0x400 00:23:35.440 Nvme0n1 : 1.03 1804.11 112.76 0.00 0.00 34859.52 4974.78 32887.16 00:23:35.440 [2024-11-19T17:22:27.973Z] =================================================================================================================== 00:23:35.440 [2024-11-19T17:22:27.973Z] Total : 1804.11 112.76 0.00 0.00 34859.52 4974.78 32887.16 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.698 rmmod nvme_tcp 00:23:35.698 rmmod nvme_fabrics 00:23:35.698 rmmod nvme_keyring 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.698 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 101480 ']' 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 101480 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 101480 ']' 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 101480 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.699 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101480 00:23:35.957 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.957 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.957 killing process with pid 101480 00:23:35.957 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101480' 00:23:35.957 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 101480 00:23:35.957 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 101480 00:23:36.214 [2024-11-19 17:22:28.491141] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.214 17:22:28 
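# The killprocess 101480 sequence traced above reduces to the pattern below;
# a minimal standalone sketch assembled from the visible xtrace, not the
# verbatim autotest_common.sh helper (error handling is simplified):
killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1                  # the '[' -z ... ']' guard
	kill -0 "$pid" 2>/dev/null || return 1     # bail out if the pid is already gone
	if [ "$(uname)" = Linux ]; then
		# refuse to signal sudo itself, mirroring the ps comm= check above
		[ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true            # reap it when it is our child
}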
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.214 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:36.472 00:23:36.472 real 0m5.689s 00:23:36.472 user 0m18.193s 00:23:36.472 sys 0m2.188s 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.472 ************************************ 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:36.472 END TEST nvmf_host_management 00:23:36.472 ************************************ 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
--interrupt-mode 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:36.472 ************************************ 00:23:36.472 START TEST nvmf_lvol 00:23:36.472 ************************************ 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:23:36.472 * Looking for test storage... 00:23:36.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.472 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.472 --rc genhtml_branch_coverage=1 00:23:36.472 --rc genhtml_function_coverage=1 00:23:36.472 --rc genhtml_legend=1 00:23:36.472 --rc geninfo_all_blocks=1 00:23:36.472 --rc geninfo_unexecuted_blocks=1 00:23:36.472 00:23:36.472 ' 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.472 --rc genhtml_branch_coverage=1 00:23:36.472 --rc genhtml_function_coverage=1 00:23:36.472 --rc genhtml_legend=1 00:23:36.472 --rc geninfo_all_blocks=1 00:23:36.472 --rc geninfo_unexecuted_blocks=1 00:23:36.472 00:23:36.472 ' 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.472 --rc genhtml_branch_coverage=1 00:23:36.472 --rc genhtml_function_coverage=1 00:23:36.472 --rc genhtml_legend=1 00:23:36.472 --rc geninfo_all_blocks=1 00:23:36.472 --rc geninfo_unexecuted_blocks=1 00:23:36.472 00:23:36.472 ' 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.472 --rc genhtml_branch_coverage=1 00:23:36.472 --rc genhtml_function_coverage=1 00:23:36.472 --rc genhtml_legend=1 00:23:36.472 --rc geninfo_all_blocks=1 00:23:36.472 --rc geninfo_unexecuted_blocks=1 00:23:36.472 00:23:36.472 ' 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.472 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.731 17:22:29 
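# build_nvmf_app_args, traced above, accumulates the target's argv in the
# NVMF_APP bash array; a condensed sketch of that assembly (the nvmf_tgt
# base command is an assumption for illustration -- only the appends are
# visible in this trace; the --interrupt-mode append follows just below):
NVMF_APP=(nvmf_tgt)
NO_HUGE=()                                        # populated only for no-hugepage runs
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)  # shm id and full log-flag mask, per the trace
NVMF_APP+=("${NO_HUGE[@]}")                       # empty here, so a no-op
NVMF_APP+=(--interrupt-mode)                      # because this suite runs --interrupt-mode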
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.731 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:36.732 Cannot find device "nvmf_init_br" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:36.732 Cannot find device "nvmf_init_br2" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:36.732 Cannot find device "nvmf_tgt_br" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.732 Cannot find device "nvmf_tgt_br2" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:36.732 Cannot find device "nvmf_init_br" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:36.732 Cannot find device "nvmf_init_br2" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:36.732 Cannot find 
device "nvmf_tgt_br" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:36.732 Cannot find device "nvmf_tgt_br2" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:36.732 Cannot find device "nvmf_br" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:36.732 Cannot find device "nvmf_init_if" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:36.732 Cannot find device "nvmf_init_if2" 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:36.732 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:36.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:36.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:23:36.990 00:23:36.990 --- 10.0.0.3 ping statistics --- 00:23:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.990 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:36.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:36.990 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:23:36.990 00:23:36.990 --- 10.0.0.4 ping statistics --- 00:23:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.990 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:36.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:23:36.990 00:23:36.990 --- 10.0.0.1 ping statistics --- 00:23:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.990 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:36.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:36.990 00:23:36.990 --- 10.0.0.2 ping statistics --- 00:23:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.990 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101849 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101849 00:23:36.990 17:22:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101849 ']' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.990 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.991 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:36.991 [2024-11-19 17:22:29.519625] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:36.991 [2024-11-19 17:22:29.520967] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:36.991 [2024-11-19 17:22:29.521032] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.248 [2024-11-19 17:22:29.676777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:37.248 [2024-11-19 17:22:29.744451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.248 [2024-11-19 17:22:29.744525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.248 [2024-11-19 17:22:29.744541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.248 [2024-11-19 17:22:29.744553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.248 [2024-11-19 17:22:29.744562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.249 [2024-11-19 17:22:29.746088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.249 [2024-11-19 17:22:29.746234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.249 [2024-11-19 17:22:29.746245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.507 [2024-11-19 17:22:29.884118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:37.507 [2024-11-19 17:22:29.884179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:37.507 [2024-11-19 17:22:29.884590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:37.507 [2024-11-19 17:22:29.885158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
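For readers following the trace above: the nvmf_veth_init steps (nvmf/common.sh@162-225) can be condensed into a standalone sketch like the one below. It reuses the interface names, addresses, and iptables rules exactly as they appear in the log, but it is an illustrative approximation, not the actual helper.

  #!/usr/bin/env bash
  # Illustrative approximation of nvmf_veth_init (not the real helper); run as root.
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # Two initiator-side veth pairs stay in the root namespace; two target-side
  # pairs get their "if" end moved into the target's network namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # Initiator addresses 10.0.0.1/.2; target addresses 10.0.0.3/.4 inside the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # One bridge ties all the "br" peers together so both sides can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Same reachability checks the test performs.
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1

The target application itself then runs inside the namespace via ip netns exec, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP at nvmf/common.sh@227 before nvmf_tgt is launched.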
00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.507 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:37.765 [2024-11-19 17:22:30.239339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.765 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.024 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:23:38.024 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.285 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:23:38.285 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:38.543 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:23:38.801 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6d68be38-19ac-4926-a7db-341da642db8d 00:23:39.059 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d68be38-19ac-4926-a7db-341da642db8d lvol 20 00:23:39.317 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f1e2c997-28ac-4b15-9ff7-289b2dc20880 00:23:39.317 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:39.585 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1e2c997-28ac-4b15-9ff7-289b2dc20880 00:23:39.842 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:40.100 [2024-11-19 17:22:32.443340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.100 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:40.357 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101983 00:23:40.357 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:23:40.357 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:23:41.291 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f1e2c997-28ac-4b15-9ff7-289b2dc20880 MY_SNAPSHOT 00:23:41.561 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e88ba091-e0a3-4b7a-8266-d56b40bc4d7b 00:23:41.561 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f1e2c997-28ac-4b15-9ff7-289b2dc20880 30 00:23:41.843 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e88ba091-e0a3-4b7a-8266-d56b40bc4d7b MY_CLONE 00:23:42.100 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=06015061-22bf-418b-8b40-197d900cf5c6 00:23:42.100 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 06015061-22bf-418b-8b40-197d900cf5c6 00:23:42.665 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101983 00:23:50.776 Initializing NVMe Controllers 00:23:50.776 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:23:50.776 Controller IO queue size 128, less than required. 00:23:50.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.776 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:50.776 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:50.776 Initialization complete. Launching workers. 
00:23:50.776 ========================================================
00:23:50.776 Latency(us)
00:23:50.776 Device Information : IOPS MiB/s Average min max
00:23:50.776 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12530.70 48.95 10220.00 3425.86 49207.63
00:23:50.776 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12260.50 47.89 10444.57 4376.89 50667.66
00:23:50.776 ========================================================
00:23:50.776 Total : 24791.20 96.84 10331.06 3425.86 50667.66
00:23:50.776
00:23:50.776 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:23:51.033 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f1e2c997-28ac-4b15-9ff7-289b2dc20880
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d68be38-19ac-4926-a7db-341da642db8d
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:51.292 rmmod nvme_tcp
00:23:51.292 rmmod nvme_fabrics
00:23:51.292 rmmod nvme_keyring
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101849 ']'
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101849
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101849 ']'
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101849
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101849
00:23:51.292 17:22:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.292 killing process with pid 101849 00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101849' 00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101849 00:23:51.292 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101849 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.860 
17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:23:51.860 00:23:51.860 real 0m15.527s 00:23:51.860 user 0m55.212s 00:23:51.860 sys 0m5.800s 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.860 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:51.860 ************************************ 00:23:51.860 END TEST nvmf_lvol 00:23:51.860 ************************************ 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:52.120 ************************************ 00:23:52.120 START TEST nvmf_lvs_grow 00:23:52.120 ************************************ 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:23:52.120 * Looking for test storage... 
00:23:52.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:52.120 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.121 --rc genhtml_branch_coverage=1 00:23:52.121 --rc genhtml_function_coverage=1 00:23:52.121 --rc genhtml_legend=1 00:23:52.121 --rc geninfo_all_blocks=1 00:23:52.121 --rc geninfo_unexecuted_blocks=1 00:23:52.121 00:23:52.121 ' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.121 --rc genhtml_branch_coverage=1 00:23:52.121 --rc genhtml_function_coverage=1 00:23:52.121 --rc genhtml_legend=1 00:23:52.121 --rc geninfo_all_blocks=1 00:23:52.121 --rc geninfo_unexecuted_blocks=1 00:23:52.121 00:23:52.121 ' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.121 --rc genhtml_branch_coverage=1 00:23:52.121 --rc genhtml_function_coverage=1 00:23:52.121 --rc genhtml_legend=1 00:23:52.121 --rc geninfo_all_blocks=1 00:23:52.121 --rc geninfo_unexecuted_blocks=1 00:23:52.121 00:23:52.121 ' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.121 --rc genhtml_branch_coverage=1 00:23:52.121 --rc genhtml_function_coverage=1 00:23:52.121 --rc genhtml_legend=1 00:23:52.121 --rc geninfo_all_blocks=1 00:23:52.121 --rc geninfo_unexecuted_blocks=1 00:23:52.121 00:23:52.121 ' 00:23:52.121 17:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
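The NVME_HOST array assembled at nvmf/common.sh@17-20 above is how later kernel-initiator steps identify themselves to the target. A hypothetical connect using exactly those pieces would look like the following (the connect itself happens outside this excerpt, so this is illustration only):

  # Hypothetical use of the NVME_HOST credentials derived above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # the bare <uuid> suffix
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"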
00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.121 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.122 17:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:52.122 Cannot find device "nvmf_init_br" 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:23:52.122 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:52.381 Cannot find device "nvmf_init_br2" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:52.381 Cannot find device "nvmf_tgt_br" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.381 Cannot find device "nvmf_tgt_br2" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:52.381 Cannot find device "nvmf_init_br" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:52.381 Cannot find device "nvmf_init_br2" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:52.381 Cannot find device "nvmf_tgt_br" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:52.381 Cannot find device "nvmf_tgt_br2" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:52.381 Cannot find device "nvmf_br" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:52.381 Cannot find device "nvmf_init_if" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:52.381 Cannot find device "nvmf_init_if2" 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:52.381 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.641 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:23:52.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:52.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:23:52.641 00:23:52.641 --- 10.0.0.3 ping statistics --- 00:23:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.641 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:52.641 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:52.641 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:23:52.641 00:23:52.641 --- 10.0.0.4 ping statistics --- 00:23:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.641 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:52.641 00:23:52.641 --- 10.0.0.1 ping statistics --- 00:23:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.641 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:52.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:23:52.641 00:23:52.641 --- 10.0.0.2 ping statistics --- 00:23:52.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.641 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=102392 00:23:52.641 17:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 102392 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 102392 ']' 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.641 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.642 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.642 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.642 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:52.642 [2024-11-19 17:22:45.151889] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:52.642 [2024-11-19 17:22:45.153169] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:52.642 [2024-11-19 17:22:45.153759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.905 [2024-11-19 17:22:45.300783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.905 [2024-11-19 17:22:45.352637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.905 [2024-11-19 17:22:45.352688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.905 [2024-11-19 17:22:45.352697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.905 [2024-11-19 17:22:45.352705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.905 [2024-11-19 17:22:45.352711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.905 [2024-11-19 17:22:45.353098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.164 [2024-11-19 17:22:45.467715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:53.164 [2024-11-19 17:22:45.468038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
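Once waitforlisten returns (the "return 0" traced just below), the RPC socket at /var/tmp/spdk.sock is live and the TCP transport can be created. A minimal standalone approximation of that readiness gate, followed by the same transport command the test issues next, might look like this (an illustrative sketch, not the actual waitforlisten helper):

  # Poll the target's RPC socket until it answers, then create the transport.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  for ((i = 0; i < 100; i++)); do
      $RPC rpc_get_methods >/dev/null 2>&1 && break   # any successful RPC means the app is up
      sleep 0.1
  done
  $RPC nvmf_create_transport -t tcp -o -u 8192        # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS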
00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.164 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:53.422 [2024-11-19 17:22:45.833967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:53.422 ************************************ 00:23:53.422 START TEST lvs_grow_clean 00:23:53.422 ************************************ 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:53.422 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:53.681 17:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:53.681 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:53.940 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c2ef480-76b0-4639-863a-cbe852f96bb7 00:23:53.940 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:23:53.940 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:54.507 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:54.507 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:54.507 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 lvol 150 00:23:54.507 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8f740cd9-12db-47d3-b7d3-01ce6327c1af 00:23:54.507 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:54.766 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:54.766 [2024-11-19 17:22:47.233791] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:54.766 [2024-11-19 17:22:47.233934] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:54.766 true 00:23:54.766 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:23:54.766 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:55.024 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:55.024 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:55.283 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f740cd9-12db-47d3-b7d3-01ce6327c1af 00:23:55.541 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:55.799 [2024-11-19 17:22:48.226197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:55.799 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:56.057 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102541 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102541 /var/tmp/bdevperf.sock 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 102541 ']' 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.058 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.058 [2024-11-19 17:22:48.524459] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
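Condensed, the export path the xtrace entries above walk through is three RPCs (rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host, serial SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f740cd9-12db-47d3-b7d3-01ce6327c1af
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches as an NVMe/TCP initiator and, per its flags, will drive 128 outstanding 4 KiB random writes for 10 seconds (-q 128 -o 4096 -w randwrite -t 10), printing a running status line each second (-S 1); -z keeps it idle until perform_tests arrives over /var/tmp/bdevperf.sock.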
00:23:56.058 [2024-11-19 17:22:48.524543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102541 ] 00:23:56.363 [2024-11-19 17:22:48.662469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.363 [2024-11-19 17:22:48.722569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.363 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.363 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:23:56.363 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:56.621 Nvme0n1 00:23:56.622 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:56.881 [ 00:23:56.881 { 00:23:56.881 "aliases": [ 00:23:56.881 "8f740cd9-12db-47d3-b7d3-01ce6327c1af" 00:23:56.881 ], 00:23:56.881 "assigned_rate_limits": { 00:23:56.881 "r_mbytes_per_sec": 0, 00:23:56.881 "rw_ios_per_sec": 0, 00:23:56.881 "rw_mbytes_per_sec": 0, 00:23:56.881 "w_mbytes_per_sec": 0 00:23:56.881 }, 00:23:56.881 "block_size": 4096, 00:23:56.881 "claimed": false, 00:23:56.881 "driver_specific": { 00:23:56.881 "mp_policy": "active_passive", 00:23:56.881 "nvme": [ 00:23:56.881 { 00:23:56.881 "ctrlr_data": { 00:23:56.881 "ana_reporting": false, 00:23:56.881 "cntlid": 1, 00:23:56.881 "firmware_revision": "25.01", 00:23:56.881 "model_number": "SPDK bdev Controller", 00:23:56.881 "multi_ctrlr": true, 00:23:56.881 "oacs": { 00:23:56.881 "firmware": 0, 00:23:56.881 "format": 0, 00:23:56.881 "ns_manage": 0, 00:23:56.881 "security": 0 00:23:56.881 }, 00:23:56.881 "serial_number": "SPDK0", 00:23:56.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.881 "vendor_id": "0x8086" 00:23:56.881 }, 00:23:56.881 "ns_data": { 00:23:56.881 "can_share": true, 00:23:56.881 "id": 1 00:23:56.881 }, 00:23:56.881 "trid": { 00:23:56.881 "adrfam": "IPv4", 00:23:56.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.881 "traddr": "10.0.0.3", 00:23:56.881 "trsvcid": "4420", 00:23:56.881 "trtype": "TCP" 00:23:56.881 }, 00:23:56.881 "vs": { 00:23:56.881 "nvme_version": "1.3" 00:23:56.881 } 00:23:56.881 } 00:23:56.881 ] 00:23:56.881 }, 00:23:56.881 "memory_domains": [ 00:23:56.881 { 00:23:56.881 "dma_device_id": "system", 00:23:56.881 "dma_device_type": 1 00:23:56.881 } 00:23:56.881 ], 00:23:56.881 "name": "Nvme0n1", 00:23:56.881 "num_blocks": 38912, 00:23:56.881 "numa_id": -1, 00:23:56.881 "product_name": "NVMe disk", 00:23:56.881 "supported_io_types": { 00:23:56.881 "abort": true, 00:23:56.881 "compare": true, 00:23:56.881 "compare_and_write": true, 00:23:56.881 "copy": true, 00:23:56.881 "flush": true, 00:23:56.881 "get_zone_info": false, 00:23:56.881 "nvme_admin": true, 00:23:56.881 "nvme_io": true, 00:23:56.881 "nvme_io_md": false, 00:23:56.881 "nvme_iov_md": false, 00:23:56.881 "read": true, 00:23:56.881 "reset": true, 00:23:56.881 "seek_data": false, 00:23:56.881 
"seek_hole": false, 00:23:56.881 "unmap": true, 00:23:56.881 "write": true, 00:23:56.881 "write_zeroes": true, 00:23:56.881 "zcopy": false, 00:23:56.881 "zone_append": false, 00:23:56.881 "zone_management": false 00:23:56.881 }, 00:23:56.881 "uuid": "8f740cd9-12db-47d3-b7d3-01ce6327c1af", 00:23:56.881 "zoned": false 00:23:56.881 } 00:23:56.881 ] 00:23:56.881 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102569 00:23:56.881 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.881 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:57.140 Running I/O for 10 seconds... 00:23:58.076 Latency(us) 00:23:58.076 [2024-11-19T17:22:50.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:58.076 Nvme0n1 : 1.00 7576.00 29.59 0.00 0.00 0.00 0.00 0.00 00:23:58.077 [2024-11-19T17:22:50.610Z] =================================================================================================================== 00:23:58.077 [2024-11-19T17:22:50.610Z] Total : 7576.00 29.59 0.00 0.00 0.00 0.00 0.00 00:23:58.077 00:23:59.011 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:23:59.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:59.011 Nvme0n1 : 2.00 8334.00 32.55 0.00 0.00 0.00 0.00 0.00 00:23:59.011 [2024-11-19T17:22:51.544Z] =================================================================================================================== 00:23:59.011 [2024-11-19T17:22:51.544Z] Total : 8334.00 32.55 0.00 0.00 0.00 0.00 0.00 00:23:59.011 00:23:59.269 true 00:23:59.269 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:23:59.269 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:59.528 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:59.528 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:59.528 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 102569 00:24:00.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:00.095 Nvme0n1 : 3.00 8641.33 33.76 0.00 0.00 0.00 0.00 0.00 00:24:00.095 [2024-11-19T17:22:52.628Z] =================================================================================================================== 00:24:00.095 [2024-11-19T17:22:52.628Z] Total : 8641.33 33.76 0.00 0.00 0.00 0.00 0.00 00:24:00.095 00:24:01.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:01.026 Nvme0n1 : 4.00 8783.00 34.31 0.00 0.00 0.00 0.00 0.00 00:24:01.026 
[2024-11-19T17:22:53.559Z] =================================================================================================================== 00:24:01.026 [2024-11-19T17:22:53.559Z] Total : 8783.00 34.31 0.00 0.00 0.00 0.00 0.00 00:24:01.026 00:24:01.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:01.965 Nvme0n1 : 5.00 8814.60 34.43 0.00 0.00 0.00 0.00 0.00 00:24:01.965 [2024-11-19T17:22:54.498Z] =================================================================================================================== 00:24:01.965 [2024-11-19T17:22:54.498Z] Total : 8814.60 34.43 0.00 0.00 0.00 0.00 0.00 00:24:01.965 00:24:03.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:03.339 Nvme0n1 : 6.00 8845.00 34.55 0.00 0.00 0.00 0.00 0.00 00:24:03.339 [2024-11-19T17:22:55.872Z] =================================================================================================================== 00:24:03.339 [2024-11-19T17:22:55.872Z] Total : 8845.00 34.55 0.00 0.00 0.00 0.00 0.00 00:24:03.339 00:24:04.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:04.274 Nvme0n1 : 7.00 8872.86 34.66 0.00 0.00 0.00 0.00 0.00 00:24:04.274 [2024-11-19T17:22:56.807Z] =================================================================================================================== 00:24:04.274 [2024-11-19T17:22:56.807Z] Total : 8872.86 34.66 0.00 0.00 0.00 0.00 0.00 00:24:04.274 00:24:05.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:05.209 Nvme0n1 : 8.00 8890.25 34.73 0.00 0.00 0.00 0.00 0.00 00:24:05.209 [2024-11-19T17:22:57.742Z] =================================================================================================================== 00:24:05.209 [2024-11-19T17:22:57.742Z] Total : 8890.25 34.73 0.00 0.00 0.00 0.00 0.00 00:24:05.209 00:24:06.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:06.145 Nvme0n1 : 9.00 8897.44 34.76 0.00 0.00 0.00 0.00 0.00 00:24:06.145 [2024-11-19T17:22:58.678Z] =================================================================================================================== 00:24:06.145 [2024-11-19T17:22:58.679Z] Total : 8897.44 34.76 0.00 0.00 0.00 0.00 0.00 00:24:06.146 00:24:07.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:07.081 Nvme0n1 : 10.00 8902.70 34.78 0.00 0.00 0.00 0.00 0.00 00:24:07.081 [2024-11-19T17:22:59.614Z] =================================================================================================================== 00:24:07.081 [2024-11-19T17:22:59.614Z] Total : 8902.70 34.78 0.00 0.00 0.00 0.00 0.00 00:24:07.081 00:24:07.081 00:24:07.081 Latency(us) 00:24:07.081 [2024-11-19T17:22:59.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:07.081 Nvme0n1 : 10.01 8909.74 34.80 0.00 0.00 14362.18 6642.97 62437.93 00:24:07.081 [2024-11-19T17:22:59.614Z] =================================================================================================================== 00:24:07.081 [2024-11-19T17:22:59.614Z] Total : 8909.74 34.80 0.00 0.00 14362.18 6642.97 62437.93 00:24:07.081 { 00:24:07.081 "results": [ 00:24:07.081 { 00:24:07.081 "job": "Nvme0n1", 00:24:07.081 "core_mask": "0x2", 00:24:07.081 "workload": "randwrite", 00:24:07.081 "status": "finished", 00:24:07.081 "queue_depth": 128, 00:24:07.081 "io_size": 4096, 
00:24:07.081 "runtime": 10.006465, 00:24:07.081 "iops": 8909.739853184916, 00:24:07.081 "mibps": 34.80367130150358, 00:24:07.081 "io_failed": 0, 00:24:07.081 "io_timeout": 0, 00:24:07.081 "avg_latency_us": 14362.178598294084, 00:24:07.081 "min_latency_us": 6642.967272727273, 00:24:07.081 "max_latency_us": 62437.93454545455 00:24:07.081 } 00:24:07.081 ], 00:24:07.081 "core_count": 1 00:24:07.081 } 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102541 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 102541 ']' 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 102541 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102541 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.081 killing process with pid 102541 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102541' 00:24:07.081 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.081 00:24:07.081 Latency(us) 00:24:07.081 [2024-11-19T17:22:59.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.081 [2024-11-19T17:22:59.614Z] =================================================================================================================== 00:24:07.081 [2024-11-19T17:22:59.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 102541 00:24:07.081 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 102541 00:24:07.339 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:07.598 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:07.856 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:24:07.856 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:08.115 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:24:08.115 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:24:08.115 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:24:08.373 [2024-11-19 17:23:00.817850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:24:08.373 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7
00:24:08.631 2024/11/19 17:23:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5c2ef480-76b0-4639-863a-cbe852f96bb7], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:24:08.631 request:
00:24:08.631 {
00:24:08.631 "method": "bdev_lvol_get_lvstores",
00:24:08.631 "params": {
00:24:08.631 "uuid": "5c2ef480-76b0-4639-863a-cbe852f96bb7"
00:24:08.631 }
00:24:08.631 }
00:24:08.631 Got JSON-RPC error response
00:24:08.631 GoRPCClient: error on JSON-RPC call
00:24:08.631 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:24:08.631 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
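The valid_exec_arg and es bookkeeping running through these entries is autotest_common.sh's NOT helper inverting an exit status: deleting aio_bdev hot-removed the lvstore, so the get_lvstores RPC is now required to fail. Reduced to its essence (rpc.py again short for the full scripts/rpc.py path; $lvs is the script's own variable):

    if rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        exit 1   # must NOT succeed once the base bdev is gone
    fi           # expected failure: Code=-19 Msg=No such device, as logged above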
00:24:08.631 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.631 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.631 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:08.890 aio_bdev 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8f740cd9-12db-47d3-b7d3-01ce6327c1af 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8f740cd9-12db-47d3-b7d3-01ce6327c1af 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:08.890 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:09.148 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f740cd9-12db-47d3-b7d3-01ce6327c1af -t 2000 00:24:09.407 [ 00:24:09.407 { 00:24:09.407 "aliases": [ 00:24:09.407 "lvs/lvol" 00:24:09.407 ], 00:24:09.407 "assigned_rate_limits": { 00:24:09.407 "r_mbytes_per_sec": 0, 00:24:09.407 "rw_ios_per_sec": 0, 00:24:09.407 "rw_mbytes_per_sec": 0, 00:24:09.407 "w_mbytes_per_sec": 0 00:24:09.407 }, 00:24:09.407 "block_size": 4096, 00:24:09.407 "claimed": false, 00:24:09.407 "driver_specific": { 00:24:09.407 "lvol": { 00:24:09.407 "base_bdev": "aio_bdev", 00:24:09.407 "clone": false, 00:24:09.407 "esnap_clone": false, 00:24:09.407 "lvol_store_uuid": "5c2ef480-76b0-4639-863a-cbe852f96bb7", 00:24:09.407 "num_allocated_clusters": 38, 00:24:09.407 "snapshot": false, 00:24:09.407 "thin_provision": false 00:24:09.407 } 00:24:09.407 }, 00:24:09.407 "name": "8f740cd9-12db-47d3-b7d3-01ce6327c1af", 00:24:09.407 "num_blocks": 38912, 00:24:09.407 "product_name": "Logical Volume", 00:24:09.407 "supported_io_types": { 00:24:09.407 "abort": false, 00:24:09.407 "compare": false, 00:24:09.407 "compare_and_write": false, 00:24:09.407 "copy": false, 00:24:09.407 "flush": false, 00:24:09.407 "get_zone_info": false, 00:24:09.407 "nvme_admin": false, 00:24:09.407 "nvme_io": false, 00:24:09.407 "nvme_io_md": false, 00:24:09.407 "nvme_iov_md": false, 00:24:09.407 "read": true, 00:24:09.407 "reset": true, 00:24:09.407 "seek_data": true, 00:24:09.407 "seek_hole": true, 00:24:09.407 "unmap": true, 00:24:09.407 "write": true, 00:24:09.407 "write_zeroes": true, 00:24:09.407 "zcopy": false, 00:24:09.407 "zone_append": false, 00:24:09.407 "zone_management": false 00:24:09.407 }, 00:24:09.407 "uuid": 
"8f740cd9-12db-47d3-b7d3-01ce6327c1af", 00:24:09.407 "zoned": false 00:24:09.407 } 00:24:09.407 ] 00:24:09.407 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:24:09.407 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:09.407 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:24:09.665 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:09.665 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:09.666 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:24:09.924 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:09.924 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f740cd9-12db-47d3-b7d3-01ce6327c1af 00:24:10.182 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c2ef480-76b0-4639-863a-cbe852f96bb7 00:24:10.440 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:10.699 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:11.266 ************************************ 00:24:11.266 END TEST lvs_grow_clean 00:24:11.266 ************************************ 00:24:11.266 00:24:11.266 real 0m17.692s 00:24:11.266 user 0m16.663s 00:24:11.266 sys 0m2.349s 00:24:11.266 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.266 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:11.266 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:11.267 ************************************ 00:24:11.267 START TEST lvs_grow_dirty 00:24:11.267 ************************************ 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:24:11.267 17:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:11.267 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:11.525 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:11.525 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:11.783 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:11.783 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:11.783 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:12.046 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:12.046 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:12.046 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a lvol 150 00:24:12.311 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:12.311 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:12.311 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:12.570 [2024-11-19 17:23:04.949733] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:12.570 [2024-11-19 17:23:04.949914] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:12.570 true 00:24:12.570 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:12.570 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:12.829 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:12.829 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:13.087 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:13.087 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:13.345 [2024-11-19 17:23:05.854171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:13.345 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102957 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102957 /var/tmp/bdevperf.sock 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102957 ']' 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
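The dirty variant rebuilds the same fixture as the clean one, and the grow mechanics it exercises again reduce to growing the backing file in place and letting the AIO bdev notice:

    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # grow the sparse file
    rpc.py bdev_aio_rescan aio_bdev   # block count 51200 -> 102400 (4 KiB blocks)

As the rescan notice shows, only the bdev's size changes at this point; total_data_clusters still reads 49 because the lvstore keeps its recorded geometry until bdev_lvol_grow_lvstore is called during the I/O run below.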
00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.603 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:13.861 [2024-11-19 17:23:06.149943] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:13.861 [2024-11-19 17:23:06.150057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102957 ] 00:24:13.861 [2024-11-19 17:23:06.303360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.861 [2024-11-19 17:23:06.371700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.801 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.801 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:14.801 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:15.060 Nvme0n1 00:24:15.060 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:15.060 [ 00:24:15.060 { 00:24:15.060 "aliases": [ 00:24:15.060 "fbf50a79-e742-412d-ac0d-23f8f21e12da" 00:24:15.060 ], 00:24:15.060 "assigned_rate_limits": { 00:24:15.060 "r_mbytes_per_sec": 0, 00:24:15.060 "rw_ios_per_sec": 0, 00:24:15.060 "rw_mbytes_per_sec": 0, 00:24:15.060 "w_mbytes_per_sec": 0 00:24:15.060 }, 00:24:15.060 "block_size": 4096, 00:24:15.060 "claimed": false, 00:24:15.060 "driver_specific": { 00:24:15.060 "mp_policy": "active_passive", 00:24:15.060 "nvme": [ 00:24:15.060 { 00:24:15.060 "ctrlr_data": { 00:24:15.060 "ana_reporting": false, 00:24:15.060 "cntlid": 1, 00:24:15.060 "firmware_revision": "25.01", 00:24:15.060 "model_number": "SPDK bdev Controller", 00:24:15.060 "multi_ctrlr": true, 00:24:15.060 "oacs": { 00:24:15.060 "firmware": 0, 00:24:15.060 "format": 0, 00:24:15.060 "ns_manage": 0, 00:24:15.060 "security": 0 00:24:15.060 }, 00:24:15.060 "serial_number": "SPDK0", 00:24:15.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.060 "vendor_id": "0x8086" 00:24:15.060 }, 00:24:15.060 "ns_data": { 00:24:15.060 "can_share": true, 00:24:15.060 "id": 1 00:24:15.060 }, 00:24:15.060 "trid": { 00:24:15.060 "adrfam": "IPv4", 00:24:15.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.060 "traddr": "10.0.0.3", 00:24:15.060 "trsvcid": "4420", 00:24:15.060 "trtype": "TCP" 00:24:15.060 }, 00:24:15.060 "vs": { 00:24:15.060 "nvme_version": "1.3" 00:24:15.060 } 00:24:15.060 } 00:24:15.060 ] 00:24:15.060 }, 00:24:15.060 "memory_domains": [ 00:24:15.060 { 00:24:15.060 "dma_device_id": "system", 00:24:15.060 "dma_device_type": 1 
00:24:15.060 } 00:24:15.060 ], 00:24:15.060 "name": "Nvme0n1", 00:24:15.060 "num_blocks": 38912, 00:24:15.060 "numa_id": -1, 00:24:15.060 "product_name": "NVMe disk", 00:24:15.060 "supported_io_types": { 00:24:15.060 "abort": true, 00:24:15.060 "compare": true, 00:24:15.060 "compare_and_write": true, 00:24:15.060 "copy": true, 00:24:15.060 "flush": true, 00:24:15.060 "get_zone_info": false, 00:24:15.060 "nvme_admin": true, 00:24:15.060 "nvme_io": true, 00:24:15.060 "nvme_io_md": false, 00:24:15.060 "nvme_iov_md": false, 00:24:15.060 "read": true, 00:24:15.060 "reset": true, 00:24:15.060 "seek_data": false, 00:24:15.060 "seek_hole": false, 00:24:15.060 "unmap": true, 00:24:15.060 "write": true, 00:24:15.060 "write_zeroes": true, 00:24:15.060 "zcopy": false, 00:24:15.060 "zone_append": false, 00:24:15.060 "zone_management": false 00:24:15.060 }, 00:24:15.060 "uuid": "fbf50a79-e742-412d-ac0d-23f8f21e12da", 00:24:15.060 "zoned": false 00:24:15.060 } 00:24:15.060 ] 00:24:15.060 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.060 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103003 00:24:15.060 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:15.318 Running I/O for 10 seconds... 00:24:16.263 Latency(us) 00:24:16.263 [2024-11-19T17:23:08.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:16.263 Nvme0n1 : 1.00 8538.00 33.35 0.00 0.00 0.00 0.00 0.00 00:24:16.263 [2024-11-19T17:23:08.796Z] =================================================================================================================== 00:24:16.263 [2024-11-19T17:23:08.796Z] Total : 8538.00 33.35 0.00 0.00 0.00 0.00 0.00 00:24:16.263 00:24:17.196 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:17.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:17.196 Nvme0n1 : 2.00 8994.50 35.13 0.00 0.00 0.00 0.00 0.00 00:24:17.196 [2024-11-19T17:23:09.729Z] =================================================================================================================== 00:24:17.196 [2024-11-19T17:23:09.729Z] Total : 8994.50 35.13 0.00 0.00 0.00 0.00 0.00 00:24:17.196 00:24:17.453 true 00:24:17.453 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:17.453 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:18.019 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:18.019 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:18.019 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
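Note the ordering above: bdev_lvol_grow_lvstore is issued between the 1 s and 2 s bdevperf samples, i.e. while 128-deep random writes are in flight against the lvol, so the metadata expansion is exercised under live I/O rather than on an idle store. The check completing just below is simply:

    rpc.py bdev_lvol_grow_lvstore -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a
    rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a | jq -r '.[0].total_data_clusters'   # now 99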
target/nvmf_lvs_grow.sh@65 -- # wait 103003 00:24:18.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:18.277 Nvme0n1 : 3.00 9070.67 35.43 0.00 0.00 0.00 0.00 0.00 00:24:18.277 [2024-11-19T17:23:10.810Z] =================================================================================================================== 00:24:18.277 [2024-11-19T17:23:10.810Z] Total : 9070.67 35.43 0.00 0.00 0.00 0.00 0.00 00:24:18.277 00:24:19.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:19.209 Nvme0n1 : 4.00 9139.50 35.70 0.00 0.00 0.00 0.00 0.00 00:24:19.209 [2024-11-19T17:23:11.742Z] =================================================================================================================== 00:24:19.209 [2024-11-19T17:23:11.742Z] Total : 9139.50 35.70 0.00 0.00 0.00 0.00 0.00 00:24:19.209 00:24:20.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:20.143 Nvme0n1 : 5.00 9229.60 36.05 0.00 0.00 0.00 0.00 0.00 00:24:20.143 [2024-11-19T17:23:12.676Z] =================================================================================================================== 00:24:20.143 [2024-11-19T17:23:12.676Z] Total : 9229.60 36.05 0.00 0.00 0.00 0.00 0.00 00:24:20.143 00:24:21.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:21.516 Nvme0n1 : 6.00 9086.67 35.49 0.00 0.00 0.00 0.00 0.00 00:24:21.516 [2024-11-19T17:23:14.049Z] =================================================================================================================== 00:24:21.516 [2024-11-19T17:23:14.049Z] Total : 9086.67 35.49 0.00 0.00 0.00 0.00 0.00 00:24:21.516 00:24:22.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:22.451 Nvme0n1 : 7.00 9126.00 35.65 0.00 0.00 0.00 0.00 0.00 00:24:22.451 [2024-11-19T17:23:14.984Z] =================================================================================================================== 00:24:22.451 [2024-11-19T17:23:14.984Z] Total : 9126.00 35.65 0.00 0.00 0.00 0.00 0.00 00:24:22.451 00:24:23.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:23.385 Nvme0n1 : 8.00 9114.00 35.60 0.00 0.00 0.00 0.00 0.00 00:24:23.385 [2024-11-19T17:23:15.918Z] =================================================================================================================== 00:24:23.385 [2024-11-19T17:23:15.918Z] Total : 9114.00 35.60 0.00 0.00 0.00 0.00 0.00 00:24:23.385 00:24:24.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:24.320 Nvme0n1 : 9.00 9159.56 35.78 0.00 0.00 0.00 0.00 0.00 00:24:24.320 [2024-11-19T17:23:16.853Z] =================================================================================================================== 00:24:24.320 [2024-11-19T17:23:16.853Z] Total : 9159.56 35.78 0.00 0.00 0.00 0.00 0.00 00:24:24.320 00:24:25.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:25.256 Nvme0n1 : 10.00 9195.90 35.92 0.00 0.00 0.00 0.00 0.00 00:24:25.256 [2024-11-19T17:23:17.789Z] =================================================================================================================== 00:24:25.256 [2024-11-19T17:23:17.789Z] Total : 9195.90 35.92 0.00 0.00 0.00 0.00 0.00 00:24:25.256 00:24:25.256 00:24:25.256 Latency(us) 00:24:25.256 [2024-11-19T17:23:17.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.256 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:24:25.256 Nvme0n1 : 10.01 9202.56 35.95 0.00 0.00 13903.90 5213.09 145847.39 00:24:25.256 [2024-11-19T17:23:17.789Z] =================================================================================================================== 00:24:25.256 [2024-11-19T17:23:17.789Z] Total : 9202.56 35.95 0.00 0.00 13903.90 5213.09 145847.39 00:24:25.256 { 00:24:25.256 "results": [ 00:24:25.256 { 00:24:25.256 "job": "Nvme0n1", 00:24:25.256 "core_mask": "0x2", 00:24:25.256 "workload": "randwrite", 00:24:25.256 "status": "finished", 00:24:25.256 "queue_depth": 128, 00:24:25.256 "io_size": 4096, 00:24:25.256 "runtime": 10.006674, 00:24:25.256 "iops": 9202.558212648879, 00:24:25.256 "mibps": 35.94749301815968, 00:24:25.256 "io_failed": 0, 00:24:25.256 "io_timeout": 0, 00:24:25.256 "avg_latency_us": 13903.902826437845, 00:24:25.256 "min_latency_us": 5213.090909090909, 00:24:25.256 "max_latency_us": 145847.3890909091 00:24:25.256 } 00:24:25.256 ], 00:24:25.256 "core_count": 1 00:24:25.256 } 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102957 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102957 ']' 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102957 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102957 00:24:25.256 killing process with pid 102957 00:24:25.256 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.256 00:24:25.256 Latency(us) 00:24:25.256 [2024-11-19T17:23:17.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.256 [2024-11-19T17:23:17.789Z] =================================================================================================================== 00:24:25.256 [2024-11-19T17:23:17.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102957' 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102957 00:24:25.256 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102957 00:24:25.514 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:25.773 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.031 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:26.031 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:26.290 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:26.290 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:24:26.290 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 102392 00:24:26.290 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 102392 00:24:26.549 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 102392 Killed "${NVMF_APP[@]}" "$@" 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=103162 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 103162 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103162 ']' 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
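This is the step that gives lvs_grow_dirty its name: rather than tearing the lvstore down cleanly, the script SIGKILLs the first target so the store is never cleanly unloaded, then starts a replacement. Condensed from the trace:

    kill -9 "$nvmfpid"    # 102392: no unload path runs, the lvstore is left dirty on disk
    wait "$nvmfpid"       # reap it; the shell prints the "Killed" diagnostic seen above
    nvmfappstart -m 0x1   # fresh nvmf_tgt (pid 103162) in the same network namespace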
00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.549 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 [2024-11-19 17:23:18.910847] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:26.549 [2024-11-19 17:23:18.912073] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:26.549 [2024-11-19 17:23:18.912158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.549 [2024-11-19 17:23:19.065066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.808 [2024-11-19 17:23:19.126948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.808 [2024-11-19 17:23:19.127017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.808 [2024-11-19 17:23:19.127031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.808 [2024-11-19 17:23:19.127043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.808 [2024-11-19 17:23:19.127053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.808 [2024-11-19 17:23:19.127522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.808 [2024-11-19 17:23:19.252434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:26.808 [2024-11-19 17:23:19.252762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.808 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:27.067 [2024-11-19 17:23:19.538092] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:27.067 [2024-11-19 17:23:19.538808] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:27.067 [2024-11-19 17:23:19.539356] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:27.067 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:27.634 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbf50a79-e742-412d-ac0d-23f8f21e12da -t 2000 00:24:27.634 [ 00:24:27.634 { 00:24:27.634 "aliases": [ 00:24:27.634 "lvs/lvol" 00:24:27.634 ], 00:24:27.634 "assigned_rate_limits": { 00:24:27.634 "r_mbytes_per_sec": 0, 00:24:27.634 "rw_ios_per_sec": 0, 00:24:27.634 "rw_mbytes_per_sec": 0, 00:24:27.634 "w_mbytes_per_sec": 0 00:24:27.634 }, 00:24:27.634 "block_size": 4096, 00:24:27.634 "claimed": false, 00:24:27.634 "driver_specific": { 00:24:27.634 "lvol": { 00:24:27.634 "base_bdev": "aio_bdev", 00:24:27.634 "clone": false, 00:24:27.634 "esnap_clone": false, 00:24:27.634 
"lvol_store_uuid": "73cefc2c-9efc-4e85-b49a-e008d2dbf21a", 00:24:27.634 "num_allocated_clusters": 38, 00:24:27.634 "snapshot": false, 00:24:27.634 "thin_provision": false 00:24:27.634 } 00:24:27.634 }, 00:24:27.634 "name": "fbf50a79-e742-412d-ac0d-23f8f21e12da", 00:24:27.634 "num_blocks": 38912, 00:24:27.634 "product_name": "Logical Volume", 00:24:27.634 "supported_io_types": { 00:24:27.634 "abort": false, 00:24:27.634 "compare": false, 00:24:27.634 "compare_and_write": false, 00:24:27.634 "copy": false, 00:24:27.634 "flush": false, 00:24:27.634 "get_zone_info": false, 00:24:27.634 "nvme_admin": false, 00:24:27.634 "nvme_io": false, 00:24:27.634 "nvme_io_md": false, 00:24:27.634 "nvme_iov_md": false, 00:24:27.634 "read": true, 00:24:27.634 "reset": true, 00:24:27.634 "seek_data": true, 00:24:27.634 "seek_hole": true, 00:24:27.634 "unmap": true, 00:24:27.634 "write": true, 00:24:27.634 "write_zeroes": true, 00:24:27.634 "zcopy": false, 00:24:27.634 "zone_append": false, 00:24:27.634 "zone_management": false 00:24:27.634 }, 00:24:27.634 "uuid": "fbf50a79-e742-412d-ac0d-23f8f21e12da", 00:24:27.634 "zoned": false 00:24:27.634 } 00:24:27.634 ] 00:24:27.634 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:24:27.634 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:27.634 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:24:27.893 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:24:27.893 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:27.893 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:24:28.152 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:24:28.152 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:28.411 [2024-11-19 17:23:20.844353] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.411 
17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.411 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.412 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:28.412 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:28.671 2024/11/19 17:23:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:73cefc2c-9efc-4e85-b49a-e008d2dbf21a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:24:28.671 request: 00:24:28.671 { 00:24:28.671 "method": "bdev_lvol_get_lvstores", 00:24:28.671 "params": { 00:24:28.671 "uuid": "73cefc2c-9efc-4e85-b49a-e008d2dbf21a" 00:24:28.671 } 00:24:28.671 } 00:24:28.671 Got JSON-RPC error response 00:24:28.671 GoRPCClient: error on JSON-RPC call 00:24:28.671 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:24:28.671 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:28.671 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:28.671 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:28.671 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:28.929 aio_bdev 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:28.929 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:29.188 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbf50a79-e742-412d-ac0d-23f8f21e12da -t 2000 00:24:29.446 [ 00:24:29.446 { 00:24:29.446 "aliases": [ 00:24:29.446 "lvs/lvol" 00:24:29.446 ], 00:24:29.446 "assigned_rate_limits": { 00:24:29.446 "r_mbytes_per_sec": 0, 00:24:29.446 "rw_ios_per_sec": 0, 00:24:29.446 "rw_mbytes_per_sec": 0, 00:24:29.446 "w_mbytes_per_sec": 0 00:24:29.446 }, 00:24:29.446 "block_size": 4096, 00:24:29.446 "claimed": false, 00:24:29.446 "driver_specific": { 00:24:29.446 "lvol": { 00:24:29.446 "base_bdev": "aio_bdev", 00:24:29.446 "clone": false, 00:24:29.446 "esnap_clone": false, 00:24:29.446 "lvol_store_uuid": "73cefc2c-9efc-4e85-b49a-e008d2dbf21a", 00:24:29.446 "num_allocated_clusters": 38, 00:24:29.446 "snapshot": false, 00:24:29.446 "thin_provision": false 00:24:29.446 } 00:24:29.446 }, 00:24:29.446 "name": "fbf50a79-e742-412d-ac0d-23f8f21e12da", 00:24:29.446 "num_blocks": 38912, 00:24:29.446 "product_name": "Logical Volume", 00:24:29.446 "supported_io_types": { 00:24:29.446 "abort": false, 00:24:29.446 "compare": false, 00:24:29.446 "compare_and_write": false, 00:24:29.446 "copy": false, 00:24:29.446 "flush": false, 00:24:29.446 "get_zone_info": false, 00:24:29.446 "nvme_admin": false, 00:24:29.446 "nvme_io": false, 00:24:29.446 "nvme_io_md": false, 00:24:29.446 "nvme_iov_md": false, 00:24:29.446 "read": true, 00:24:29.446 "reset": true, 00:24:29.446 "seek_data": true, 00:24:29.446 "seek_hole": true, 00:24:29.446 "unmap": true, 00:24:29.446 "write": true, 00:24:29.446 "write_zeroes": true, 00:24:29.446 "zcopy": false, 00:24:29.446 "zone_append": false, 00:24:29.446 "zone_management": false 00:24:29.446 }, 00:24:29.446 "uuid": "fbf50a79-e742-412d-ac0d-23f8f21e12da", 00:24:29.446 "zoned": false 00:24:29.446 } 00:24:29.446 ] 00:24:29.446 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:24:29.446 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:29.446 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:29.705 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:29.705 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:29.705 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:29.964 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:29.964 
17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fbf50a79-e742-412d-ac0d-23f8f21e12da 00:24:30.223 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73cefc2c-9efc-4e85-b49a-e008d2dbf21a 00:24:30.481 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:30.740 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:30.999 00:24:30.999 real 0m19.843s 00:24:30.999 user 0m27.600s 00:24:30.999 sys 0m8.047s 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:30.999 ************************************ 00:24:30.999 END TEST lvs_grow_dirty 00:24:30.999 ************************************ 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:30.999 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:30.999 nvmf_trace.0 00:24:31.258 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:24:31.258 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:24:31.258 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.258 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:24:31.516 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.516 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:24:31.516 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.516 17:23:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.516 rmmod nvme_tcp 00:24:31.516 rmmod nvme_fabrics 00:24:31.516 rmmod nvme_keyring 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 103162 ']' 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 103162 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 103162 ']' 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 103162 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103162 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.775 killing process with pid 103162 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103162' 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 103162 00:24:31.775 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 103162 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:32.034 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:24:32.293 00:24:32.293 real 0m40.263s 00:24:32.293 user 0m45.672s 00:24:32.293 sys 0m11.559s 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:32.293 ************************************ 00:24:32.293 END TEST nvmf_lvs_grow 00:24:32.293 ************************************ 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:32.293 ************************************ 00:24:32.293 START TEST nvmf_bdev_io_wait 00:24:32.293 ************************************ 00:24:32.293 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:24:32.293 * Looking for test storage... 00:24:32.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.553 --rc genhtml_branch_coverage=1 00:24:32.553 --rc genhtml_function_coverage=1 00:24:32.553 --rc genhtml_legend=1 00:24:32.553 --rc geninfo_all_blocks=1 00:24:32.553 --rc geninfo_unexecuted_blocks=1 00:24:32.553 00:24:32.553 ' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.553 --rc genhtml_branch_coverage=1 00:24:32.553 --rc genhtml_function_coverage=1 00:24:32.553 --rc genhtml_legend=1 00:24:32.553 --rc geninfo_all_blocks=1 00:24:32.553 --rc geninfo_unexecuted_blocks=1 00:24:32.553 00:24:32.553 ' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.553 --rc genhtml_branch_coverage=1 00:24:32.553 --rc genhtml_function_coverage=1 00:24:32.553 --rc genhtml_legend=1 00:24:32.553 --rc geninfo_all_blocks=1 00:24:32.553 --rc geninfo_unexecuted_blocks=1 00:24:32.553 00:24:32.553 ' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.553 --rc genhtml_branch_coverage=1 00:24:32.553 --rc genhtml_function_coverage=1 00:24:32.553 --rc genhtml_legend=1 00:24:32.553 --rc geninfo_all_blocks=1 00:24:32.553 --rc 
geninfo_unexecuted_blocks=1 00:24:32.553 00:24:32.553 ' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.553 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:32.554 Cannot find device "nvmf_init_br" 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:24:32.554 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:32.554 Cannot find device "nvmf_init_br2" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:32.554 Cannot find device "nvmf_tgt_br" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.554 Cannot find device "nvmf_tgt_br2" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:32.554 Cannot find device "nvmf_init_br" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:32.554 Cannot find device "nvmf_init_br2" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:24:32.554 Cannot find device "nvmf_tgt_br" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:32.554 Cannot find device "nvmf_tgt_br2" 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:24:32.554 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:32.554 Cannot find device "nvmf_br" 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:32.814 Cannot find device "nvmf_init_if" 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:32.814 Cannot find device "nvmf_init_if2" 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:32.814 17:23:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.814 
17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:32.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:32.814 00:24:32.814 --- 10.0.0.3 ping statistics --- 00:24:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.814 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:32.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:32.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:24:32.814 00:24:32.814 --- 10.0.0.4 ping statistics --- 00:24:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.814 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:32.814 00:24:32.814 --- 10.0.0.1 ping statistics --- 00:24:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.814 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:32.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:24:32.814 00:24:32.814 --- 10.0.0.2 ping statistics --- 00:24:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.814 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.814 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103628 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103628 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 103628 ']' 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.073 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.073 [2024-11-19 17:23:25.432834] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:33.073 [2024-11-19 17:23:25.434091] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:33.073 [2024-11-19 17:23:25.434161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.073 [2024-11-19 17:23:25.591510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.332 [2024-11-19 17:23:25.655842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.332 [2024-11-19 17:23:25.656264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.332 [2024-11-19 17:23:25.656446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.332 [2024-11-19 17:23:25.656603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.332 [2024-11-19 17:23:25.656656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.332 [2024-11-19 17:23:25.658391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.332 [2024-11-19 17:23:25.658488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.332 [2024-11-19 17:23:25.658537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.332 [2024-11-19 17:23:25.658543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.332 [2024-11-19 17:23:25.660303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
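The app_setup_trace notices appear because the target was started with -e 0xFFFF, enabling every tracepoint group. A snapshot can be pulled with the command the log itself suggests (the build/bin location of the spdk_trace binary is assumed from this repo layout):

  # capture a snapshot of the nvmf target's trace events (shm instance id 0)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep a copy of the shared-memory trace for offline analysis, per the notice
  cp /dev/shm/nvmf_trace.0 ~/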
00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.332 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 [2024-11-19 17:23:25.863970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:33.592 [2024-11-19 17:23:25.864267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:33.592 [2024-11-19 17:23:25.865341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:33.592 [2024-11-19 17:23:25.866382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
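The --wait-for-rpc flag exists precisely so that bdev_set_options can land before subsystem initialization; once framework_start_init runs, the four poll-group threads above are created already in interrupt mode. Outside the test harness, the rpc_cmd pair is equivalent to the following (socket path as used throughout this run), and the transport and subsystem plumbing in the records that follow goes over the same socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # tune the bdev layer (-p io pool size, -c io cache size) while the app
  # is still held in the startup state by --wait-for-rpc
  $rpc -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1
  # complete subsystem initialization; poll groups are spawned here
  $rpc -s /var/tmp/spdk.sock framework_start_init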
00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 [2024-11-19 17:23:25.876663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 Malloc0 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:33.592 [2024-11-19 17:23:25.948894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103662 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103664 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:33.592 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:33.592 { 00:24:33.592 "params": { 00:24:33.592 "name": "Nvme$subsystem", 00:24:33.592 "trtype": "$TEST_TRANSPORT", 00:24:33.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.592 "adrfam": "ipv4", 00:24:33.592 "trsvcid": "$NVMF_PORT", 00:24:33.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.592 "hdgst": ${hdgst:-false}, 00:24:33.592 "ddgst": ${ddgst:-false} 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 } 00:24:33.593 EOF 00:24:33.593 )") 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103666 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103669 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:33.593 { 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme$subsystem", 00:24:33.593 "trtype": "$TEST_TRANSPORT", 00:24:33.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "$NVMF_PORT", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.593 "hdgst": ${hdgst:-false}, 00:24:33.593 "ddgst": ${ddgst:-false} 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 } 00:24:33.593 EOF 00:24:33.593 )") 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:24:33.593 
17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:33.593 { 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme$subsystem", 00:24:33.593 "trtype": "$TEST_TRANSPORT", 00:24:33.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "$NVMF_PORT", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.593 "hdgst": ${hdgst:-false}, 00:24:33.593 "ddgst": ${ddgst:-false} 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 } 00:24:33.593 EOF 00:24:33.593 )") 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
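The interleaved records above and below come from four gen_nvmf_target_json calls running in parallel, one per bdevperf instance; each heredoc fragment is assembled by jq into the JSON that printf emits next. bdevperf never sees a file on disk: the config arrives through process substitution, which is what /dev/fd/63 in the command lines stands for. Conceptually, for the write instance (a sketch of the harness invocation, not an extra command this log ran):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
      --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256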
00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme1", 00:24:33.593 "trtype": "tcp", 00:24:33.593 "traddr": "10.0.0.3", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "4420", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.593 "hdgst": false, 00:24:33.593 "ddgst": false 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 }' 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme1", 00:24:33.593 "trtype": "tcp", 00:24:33.593 "traddr": "10.0.0.3", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "4420", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.593 "hdgst": false, 00:24:33.593 "ddgst": false 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 }' 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:33.593 { 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme$subsystem", 00:24:33.593 "trtype": "$TEST_TRANSPORT", 00:24:33.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "$NVMF_PORT", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.593 "hdgst": ${hdgst:-false}, 00:24:33.593 "ddgst": ${ddgst:-false} 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 } 00:24:33.593 EOF 00:24:33.593 )") 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme1", 00:24:33.593 "trtype": "tcp", 00:24:33.593 "traddr": "10.0.0.3", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "4420", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.593 "hdgst": false, 00:24:33.593 "ddgst": false 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 }' 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:24:33.593 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:33.593 "params": { 00:24:33.593 "name": "Nvme1", 00:24:33.593 "trtype": "tcp", 00:24:33.593 "traddr": "10.0.0.3", 00:24:33.593 "adrfam": "ipv4", 00:24:33.593 "trsvcid": "4420", 00:24:33.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.593 "hdgst": false, 00:24:33.593 "ddgst": false 00:24:33.593 }, 00:24:33.593 "method": "bdev_nvme_attach_controller" 00:24:33.593 }' 00:24:33.593 [2024-11-19 17:23:26.017591] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:33.593 [2024-11-19 17:23:26.018340] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:33.593 [2024-11-19 17:23:26.020701] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:33.593 [2024-11-19 17:23:26.020807] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:24:33.593 [2024-11-19 17:23:26.039508] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:33.593 [2024-11-19 17:23:26.039593] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:24:33.593 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103662 00:24:33.593 [2024-11-19 17:23:26.056402] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:33.593 [2024-11-19 17:23:26.056639] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:24:33.853 [2024-11-19 17:23:26.263057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.853 [2024-11-19 17:23:26.324613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:33.853 [2024-11-19 17:23:26.370853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.112 [2024-11-19 17:23:26.431676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:34.112 [2024-11-19 17:23:26.447333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.112 [2024-11-19 17:23:26.506106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:34.112 Running I/O for 1 seconds... 00:24:34.112 [2024-11-19 17:23:26.531811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.112 Running I/O for 1 seconds... 00:24:34.112 [2024-11-19 17:23:26.591447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:34.371 Running I/O for 1 seconds... 00:24:34.371 Running I/O for 1 seconds... 00:24:35.307 5558.00 IOPS, 21.71 MiB/s 00:24:35.307 Latency(us) 00:24:35.307 [2024-11-19T17:23:27.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.307 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:24:35.307 Nvme1n1 : 1.02 5581.65 21.80 0.00 0.00 22740.92 4408.79 38368.35 00:24:35.307 [2024-11-19T17:23:27.840Z] =================================================================================================================== 00:24:35.307 [2024-11-19T17:23:27.840Z] Total : 5581.65 21.80 0.00 0.00 22740.92 4408.79 38368.35 00:24:35.307 6330.00 IOPS, 24.73 MiB/s 00:24:35.307 Latency(us) 00:24:35.307 [2024-11-19T17:23:27.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.307 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:24:35.307 Nvme1n1 : 1.01 6373.04 24.89 0.00 0.00 19944.19 7000.44 27167.65 00:24:35.307 [2024-11-19T17:23:27.840Z] =================================================================================================================== 00:24:35.307 [2024-11-19T17:23:27.840Z] Total : 6373.04 24.89 0.00 0.00 19944.19 7000.44 27167.65 00:24:35.307 212320.00 IOPS, 829.38 MiB/s 00:24:35.307 Latency(us) 00:24:35.307 [2024-11-19T17:23:27.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.307 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:24:35.307 Nvme1n1 : 1.00 211970.19 828.01 0.00 0.00 600.81 245.76 1630.95 00:24:35.308 [2024-11-19T17:23:27.841Z] =================================================================================================================== 00:24:35.308 [2024-11-19T17:23:27.841Z] Total : 211970.19 828.01 0.00 0.00 600.81 245.76 1630.95 00:24:35.308 6441.00 IOPS, 25.16 MiB/s 00:24:35.308 Latency(us) 00:24:35.308 [2024-11-19T17:23:27.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.308 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:24:35.308 Nvme1n1 : 1.01 6568.38 25.66 0.00 0.00 19435.45 3678.95 46232.67 00:24:35.308 [2024-11-19T17:23:27.841Z] 
=================================================================================================================== 00:24:35.308 [2024-11-19T17:23:27.841Z] Total : 6568.38 25.66 0.00 0.00 19435.45 3678.95 46232.67 00:24:35.308 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103664 00:24:35.308 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103666 00:24:35.308 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103669 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.566 rmmod nvme_tcp 00:24:35.566 rmmod nvme_fabrics 00:24:35.566 rmmod nvme_keyring 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103628 ']' 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103628 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 103628 ']' 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 103628 00:24:35.566 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:24:35.566 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.566 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
103628 00:24:35.566 killing process with pid 103628 00:24:35.566 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.567 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.567 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103628' 00:24:35.567 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 103628 00:24:35.567 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 103628 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:35.825 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:24:36.084 00:24:36.084 real 0m3.778s 00:24:36.084 user 0m13.857s 00:24:36.084 sys 0m2.260s 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.084 ************************************ 00:24:36.084 END TEST nvmf_bdev_io_wait 00:24:36.084 ************************************ 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.084 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:36.084 ************************************ 00:24:36.085 START TEST nvmf_queue_depth 00:24:36.085 ************************************ 00:24:36.085 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:24:36.344 * Looking for test storage... 
00:24:36.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.344 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:36.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.345 --rc genhtml_branch_coverage=1 00:24:36.345 --rc genhtml_function_coverage=1 00:24:36.345 --rc genhtml_legend=1 00:24:36.345 --rc geninfo_all_blocks=1 00:24:36.345 --rc geninfo_unexecuted_blocks=1 00:24:36.345 00:24:36.345 ' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.345 --rc genhtml_branch_coverage=1 00:24:36.345 --rc genhtml_function_coverage=1 00:24:36.345 --rc genhtml_legend=1 00:24:36.345 --rc geninfo_all_blocks=1 00:24:36.345 --rc geninfo_unexecuted_blocks=1 00:24:36.345 00:24:36.345 ' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.345 --rc genhtml_branch_coverage=1 00:24:36.345 --rc genhtml_function_coverage=1 00:24:36.345 --rc genhtml_legend=1 00:24:36.345 --rc geninfo_all_blocks=1 00:24:36.345 --rc geninfo_unexecuted_blocks=1 00:24:36.345 00:24:36.345 ' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.345 --rc genhtml_branch_coverage=1 00:24:36.345 --rc genhtml_function_coverage=1 00:24:36.345 --rc genhtml_legend=1 00:24:36.345 --rc geninfo_all_blocks=1 00:24:36.345 --rc 
geninfo_unexecuted_blocks=1 00:24:36.345 00:24:36.345 ' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:36.345 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:36.346 Cannot find device "nvmf_init_br" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:36.346 Cannot find device "nvmf_init_br2" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:36.346 Cannot find device "nvmf_tgt_br" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.346 Cannot find device "nvmf_tgt_br2" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:36.346 Cannot find device "nvmf_init_br" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:36.346 Cannot find device "nvmf_init_br2" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:24:36.346 
17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:36.346 Cannot find device "nvmf_tgt_br" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:36.346 Cannot find device "nvmf_tgt_br2" 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:24:36.346 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:36.612 Cannot find device "nvmf_br" 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:36.612 Cannot find device "nvmf_init_if" 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:36.612 Cannot find device "nvmf_init_if2" 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:36.612 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:36.612 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:36.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:36.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:36.902 00:24:36.902 --- 10.0.0.3 ping statistics --- 00:24:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.902 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:36.902 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:36.902 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:24:36.902 00:24:36.902 --- 10.0.0.4 ping statistics --- 00:24:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.902 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:36.902 00:24:36.902 --- 10.0.0.1 ping statistics --- 00:24:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.902 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:36.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:36.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:24:36.902 00:24:36.902 --- 10.0.0.2 ping statistics --- 00:24:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.902 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103928 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103928 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103928 ']' 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
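Stripped of the harness wrappers, the nvmfappstart step above launches the target binary inside the test namespace and then blocks until its RPC socket answers. A minimal sketch of that launch, assuming the repo layout shown in the log and using rpc_get_methods as the liveness probe (the harness's waitforlisten helper may probe differently):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # wait until the target is serving RPCs on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done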
00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.902 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 [2024-11-19 17:23:29.281600] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:36.902 [2024-11-19 17:23:29.282874] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:36.902 [2024-11-19 17:23:29.282944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.174 [2024-11-19 17:23:29.439651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.174 [2024-11-19 17:23:29.501441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.174 [2024-11-19 17:23:29.501528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.174 [2024-11-19 17:23:29.501545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.174 [2024-11-19 17:23:29.501557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.174 [2024-11-19 17:23:29.501568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.174 [2024-11-19 17:23:29.502099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.174 [2024-11-19 17:23:29.643374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:37.174 [2024-11-19 17:23:29.643874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
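With the target up, the rpc_cmd calls that follow provision a minimal TCP subsystem for the queue-depth run. Condensed (rpc.py stands for scripts/rpc.py against the default /var/tmp/spdk.sock; every name and flag below is taken verbatim from the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev with 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420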
00:24:37.174 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.174 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:24:37.174 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.174 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.174 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 [2024-11-19 17:23:29.735191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 Malloc0 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 [2024-11-19 17:23:29.807193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103965 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103965 /var/tmp/bdevperf.sock 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103965 ']' 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.433 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:37.433 [2024-11-19 17:23:29.876660] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:37.433 [2024-11-19 17:23:29.876781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103965 ] 00:24:37.691 [2024-11-19 17:23:30.029840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.691 [2024-11-19 17:23:30.110995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:38.627 NVMe0n1 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.627 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.627 Running I/O for 10 seconds... 00:24:40.937 9216.00 IOPS, 36.00 MiB/s [2024-11-19T17:23:34.405Z] 9733.50 IOPS, 38.02 MiB/s [2024-11-19T17:23:35.341Z] 10105.33 IOPS, 39.47 MiB/s [2024-11-19T17:23:36.277Z] 10281.75 IOPS, 40.16 MiB/s [2024-11-19T17:23:37.214Z] 10446.20 IOPS, 40.81 MiB/s [2024-11-19T17:23:38.150Z] 10570.17 IOPS, 41.29 MiB/s [2024-11-19T17:23:39.085Z] 10664.29 IOPS, 41.66 MiB/s [2024-11-19T17:23:40.463Z] 10740.88 IOPS, 41.96 MiB/s [2024-11-19T17:23:41.400Z] 10798.89 IOPS, 42.18 MiB/s [2024-11-19T17:23:41.400Z] 10869.40 IOPS, 42.46 MiB/s 00:24:48.867 Latency(us) 00:24:48.867 [2024-11-19T17:23:41.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:24:48.867 Verification LBA range: start 0x0 length 0x4000 00:24:48.867 NVMe0n1 : 10.07 10898.92 42.57 0.00 0.00 93615.00 21924.77 81979.58 00:24:48.867 [2024-11-19T17:23:41.400Z] =================================================================================================================== 00:24:48.867 [2024-11-19T17:23:41.400Z] Total : 10898.92 42.57 0.00 0.00 93615.00 21924.77 81979.58 00:24:48.867 { 00:24:48.867 "results": [ 00:24:48.867 { 00:24:48.867 "job": "NVMe0n1", 00:24:48.867 "core_mask": "0x1", 00:24:48.867 "workload": "verify", 00:24:48.867 "status": "finished", 00:24:48.867 "verify_range": { 00:24:48.867 "start": 0, 00:24:48.867 "length": 16384 00:24:48.867 }, 00:24:48.867 "queue_depth": 1024, 00:24:48.867 "io_size": 4096, 00:24:48.867 "runtime": 10.066867, 00:24:48.867 "iops": 10898.92217707853, 00:24:48.867 "mibps": 42.57391475421301, 00:24:48.867 "io_failed": 0, 00:24:48.867 "io_timeout": 0, 00:24:48.867 "avg_latency_us": 93615.00356146086, 00:24:48.867 "min_latency_us": 21924.77090909091, 00:24:48.867 "max_latency_us": 81979.57818181819 00:24:48.867 } 00:24:48.867 ], 
00:24:48.867 "core_count": 1 00:24:48.867 } 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103965 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103965 ']' 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103965 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103965 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.867 killing process with pid 103965 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103965' 00:24:48.867 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.867 00:24:48.867 Latency(us) 00:24:48.867 [2024-11-19T17:23:41.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.867 [2024-11-19T17:23:41.400Z] =================================================================================================================== 00:24:48.867 [2024-11-19T17:23:41.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103965 00:24:48.867 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103965 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.126 rmmod nvme_tcp 00:24:49.126 rmmod nvme_fabrics 00:24:49.126 rmmod nvme_keyring 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:24:49.126 17:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103928 ']' 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103928 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103928 ']' 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103928 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.126 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103928 00:24:49.127 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:49.127 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:49.127 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103928' 00:24:49.127 killing process with pid 103928 00:24:49.127 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103928 00:24:49.127 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103928 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:49.386 17:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:49.386 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:24:49.645 00:24:49.645 real 0m13.429s 00:24:49.645 user 0m22.085s 00:24:49.645 sys 0m2.884s 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.645 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:49.645 ************************************ 00:24:49.645 END TEST nvmf_queue_depth 00:24:49.645 ************************************ 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:49.645 ************************************ 00:24:49.645 START TEST nvmf_target_multipath 00:24:49.645 ************************************ 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:24:49.645 * Looking for test storage... 
00:24:49.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.645 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.906 --rc genhtml_branch_coverage=1 00:24:49.906 --rc genhtml_function_coverage=1 00:24:49.906 --rc genhtml_legend=1 00:24:49.906 --rc geninfo_all_blocks=1 00:24:49.906 --rc geninfo_unexecuted_blocks=1 00:24:49.906 00:24:49.906 ' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.906 --rc genhtml_branch_coverage=1 00:24:49.906 --rc genhtml_function_coverage=1 00:24:49.906 --rc genhtml_legend=1 00:24:49.906 --rc geninfo_all_blocks=1 00:24:49.906 --rc geninfo_unexecuted_blocks=1 00:24:49.906 00:24:49.906 ' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.906 --rc genhtml_branch_coverage=1 00:24:49.906 --rc genhtml_function_coverage=1 00:24:49.906 --rc genhtml_legend=1 00:24:49.906 --rc geninfo_all_blocks=1 00:24:49.906 --rc geninfo_unexecuted_blocks=1 00:24:49.906 00:24:49.906 ' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.906 --rc genhtml_branch_coverage=1 00:24:49.906 --rc genhtml_function_coverage=1 00:24:49.906 --rc 
genhtml_legend=1 00:24:49.906 --rc geninfo_all_blocks=1 00:24:49.906 --rc geninfo_unexecuted_blocks=1 00:24:49.906 00:24:49.906 ' 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.906 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.907 17:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.907 17:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:49.907 Cannot find device "nvmf_init_br" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:49.907 Cannot find device "nvmf_init_br2" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:49.907 Cannot find device "nvmf_tgt_br" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:49.907 Cannot find device "nvmf_tgt_br2" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:24:49.907 Cannot find device "nvmf_init_br" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:49.907 Cannot find device "nvmf_init_br2" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:49.907 Cannot find device "nvmf_tgt_br" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:49.907 Cannot find device "nvmf_tgt_br2" 00:24:49.907 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:49.908 Cannot find device "nvmf_br" 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:49.908 Cannot find device "nvmf_init_if" 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:49.908 Cannot find device "nvmf_init_if2" 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:49.908 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:50.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:50.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:24:50.167 00:24:50.167 --- 10.0.0.3 ping statistics --- 00:24:50.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.167 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:50.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:50.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:24:50.167 00:24:50.167 --- 10.0.0.4 ping statistics --- 00:24:50.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.167 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:50.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:24:50.167 00:24:50.167 --- 10.0.0.1 ping statistics --- 00:24:50.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.167 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:50.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:50.167 00:24:50.167 --- 10.0.0.2 ping statistics --- 00:24:50.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.167 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.167 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=104356 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 104356 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 104356 ']' 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
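The network fixture rebuilt here is the same one the queue-depth test used: two veth pairs per side, the target ends moved into nvmf_tgt_ns_spdk, the host ends enslaved to a bridge, and iptables accept rules for port 4420. One path of it, reduced to the bare commands (the second path is identical with the if2/br2 names and 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br    # host ends of both veth pairs join the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                         # host-to-namespace reachability check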
00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.426 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.426 [2024-11-19 17:23:42.766065] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:50.426 [2024-11-19 17:23:42.767318] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:50.426 [2024-11-19 17:23:42.767385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.426 [2024-11-19 17:23:42.913413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.685 [2024-11-19 17:23:42.965856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.685 [2024-11-19 17:23:42.965928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.685 [2024-11-19 17:23:42.965939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.685 [2024-11-19 17:23:42.965946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.685 [2024-11-19 17:23:42.965952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.685 [2024-11-19 17:23:42.967263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.685 [2024-11-19 17:23:42.967413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.685 [2024-11-19 17:23:42.967546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.685 [2024-11-19 17:23:42.967547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.685 [2024-11-19 17:23:43.084159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:50.685 [2024-11-19 17:23:43.084642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:50.685 [2024-11-19 17:23:43.084795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:50.685 [2024-11-19 17:23:43.085052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:50.685 [2024-11-19 17:23:43.085582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
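The startup just traced reduces to two steps: launch nvmf_tgt inside the target namespace with interrupt mode and a four-core mask (hence the four reactor threads and the "to intr mode" notices above), then block until the RPC socket answers. A rough equivalent, with the polling loop simplified from the real waitforlisten helper:

    # Start the target in the namespace; binary path and flags copied
    # from the log.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # waitforlisten, approximately: retry a harmless RPC until the
    # UNIX domain socket is up.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done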
00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.685 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:50.944 [2024-11-19 17:23:43.428599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.944 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:51.511 Malloc0 00:24:51.511 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:24:51.511 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.077 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:52.077 [2024-11-19 17:23:44.592653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:52.335 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:24:52.594 [2024-11-19 17:23:44.876527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:24:52.594 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:24:52.594 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:24:52.852 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:24:52.853 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:24:52.853 17:23:45 
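Condensed, multipath.sh steps 59 through 68 build one subsystem reachable over two paths: a TCP transport, a Malloc namespace under cnode1 with ANA reporting (-r), listeners on both target addresses, and two nvme connect calls sharing one host NQN so the kernel attaches both sessions to a single multipath controller (-g/-G request TCP header and data digests). A sketch using the values from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07
    hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.4 -s 4420

    # Same host NQN on both connects: one controller, two ANA paths.
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n "$nqn" -a 10.0.0.4 -s 4420 -g -G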
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.853 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:52.853 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:24:54.756 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=104476 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:54.757 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:24:54.757 [global] 00:24:54.757 thread=1 00:24:54.757 invalidate=1 00:24:54.757 rw=randrw 00:24:54.757 time_based=1 00:24:54.757 runtime=6 00:24:54.757 ioengine=libaio 00:24:54.757 direct=1 00:24:54.757 bs=4096 00:24:54.757 iodepth=128 00:24:54.757 norandommap=0 00:24:54.757 numjobs=1 00:24:54.757 00:24:54.757 verify_dump=1 00:24:54.757 verify_backlog=512 00:24:54.757 verify_state_save=0 00:24:54.757 do_verify=1 00:24:54.757 verify=crc32c-intel 00:24:54.757 [job0] 00:24:54.757 filename=/dev/nvme0n1 00:24:54.757 Could not set queue depth (nvme0n1) 00:24:55.016 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:55.016 fio-3.35 00:24:55.016 Starting 1 thread 00:24:55.952 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:55.952 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
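Two helpers dominate the surrounding trace. The fio job shown above is generated by fio-wrapper, whose flags map directly onto the job file (-i 4096 becomes bs=4096, -d 128 becomes iodepth=128, -t randrw becomes rw=randrw, -r 6 becomes runtime=6, and -v turns on crc32c-intel verification). The check_ana_state expansions are a simple poll of the kernel's per-path ANA state in sysfs; reconstructed from the traced lines at multipath.sh@18 through @26:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Retry while sysfs is missing or still reports another state;
        # give up after roughly 20 seconds.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }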
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:56.211 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:57.588 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:57.588 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:57.588 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:57.588 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:57.588 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:57.847 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:58.783 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:58.783 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
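The pattern driven by multipath.sh@92 through @102 is an ANA failover and failback while the fio job is still running: demote one listener, make the other primary, wait until the host-side sysfs state flips, then swap the roles. Note the spelling mismatch the helper has to bridge: the RPC takes non_optimized with an underscore, while sysfs reports non-optimized with a hyphen. The sequence, as in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Fail over to the 10.0.0.4 path.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized

    # Fail back the other way.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
    check_ana_state nvme0c0n1 non-optimized
    check_ana_state nvme0c1n1 inaccessible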
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:58.783 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:58.783 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 104476 00:25:01.316 00:25:01.316 job0: (groupid=0, jobs=1): err= 0: pid=104497: Tue Nov 19 17:23:53 2024 00:25:01.317 read: IOPS=12.8k, BW=50.1MiB/s (52.6MB/s)(301MiB/6005msec) 00:25:01.317 slat (nsec): min=1740, max=5685.2k, avg=44504.02, stdev=221325.42 00:25:01.317 clat (usec): min=775, max=13814, avg=6737.62, stdev=1158.14 00:25:01.317 lat (usec): min=1187, max=13832, avg=6782.12, stdev=1169.11 00:25:01.317 clat percentiles (usec): 00:25:01.317 | 1.00th=[ 3949], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 5932], 00:25:01.317 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:25:01.317 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8094], 95.00th=[ 8717], 00:25:01.317 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11731], 99.95th=[11994], 00:25:01.317 | 99.99th=[13566] 00:25:01.317 bw ( KiB/s): min=10632, max=32064, per=52.31%, avg=26848.45, stdev=7783.39, samples=11 00:25:01.317 iops : min= 2658, max= 8016, avg=6712.09, stdev=1945.84, samples=11 00:25:01.317 write: IOPS=7479, BW=29.2MiB/s (30.6MB/s)(152MiB/5203msec); 0 zone resets 00:25:01.317 slat (usec): min=2, max=6542, avg=56.78, stdev=135.25 00:25:01.317 clat (usec): min=538, max=13363, avg=6227.73, stdev=955.44 00:25:01.317 lat (usec): min=772, max=13381, avg=6284.51, stdev=958.35 00:25:01.317 clat percentiles (usec): 00:25:01.317 | 1.00th=[ 3326], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 5735], 00:25:01.317 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:25:01.317 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7373], 00:25:01.317 | 99.00th=[ 9372], 99.50th=[10159], 99.90th=[11994], 99.95th=[12911], 00:25:01.317 | 99.99th=[13304] 00:25:01.317 bw ( KiB/s): min=10840, max=31984, per=89.59%, avg=26804.91, stdev=7652.35, samples=11 00:25:01.317 iops : min= 2710, max= 7996, avg=6701.18, stdev=1913.07, samples=11 00:25:01.317 lat (usec) : 750=0.01%, 1000=0.01% 00:25:01.317 lat (msec) : 2=0.06%, 4=1.59%, 10=97.09%, 20=1.25% 00:25:01.317 cpu : usr=5.81%, sys=23.61%, ctx=8789, majf=0, minf=102 00:25:01.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:01.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:01.317 issued rwts: total=77046,38915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:01.317 00:25:01.317 Run status group 0 (all jobs): 00:25:01.317 READ: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=301MiB (316MB), run=6005-6005msec 00:25:01.317 WRITE: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=152MiB (159MB), run=5203-5203msec 00:25:01.317 00:25:01.317 Disk stats (read/write): 00:25:01.317 nvme0n1: ios=75871/38135, merge=0/0, ticks=476520/225595, in_queue=702115, util=98.65% 00:25:01.317 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:01.317 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:25:01.576 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=104627 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:02.950 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:25:02.950 [global] 00:25:02.950 thread=1 00:25:02.950 invalidate=1 00:25:02.950 rw=randrw 00:25:02.950 time_based=1 00:25:02.950 runtime=6 00:25:02.950 ioengine=libaio 00:25:02.950 direct=1 00:25:02.950 bs=4096 00:25:02.950 iodepth=128 00:25:02.950 norandommap=0 00:25:02.950 numjobs=1 00:25:02.950 00:25:02.950 verify_dump=1 00:25:02.950 verify_backlog=512 00:25:02.950 verify_state_save=0 00:25:02.950 do_verify=1 00:25:02.950 verify=crc32c-intel 00:25:02.950 [job0] 00:25:02.950 filename=/dev/nvme0n1 00:25:02.950 Could not set queue depth (nvme0n1) 00:25:02.950 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:02.950 fio-3.35 00:25:02.950 Starting 1 thread 00:25:03.887 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:03.887 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
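Before each fio run the test also picks the kernel's native multipath I/O policy: the bare '# echo numa' and '# echo round-robin' lines are writes whose redirection target xtrace does not show. Presumably they land in the per-subsystem iopolicy attribute discovered earlier, something like:

    # Kernel NVMe multipath policy for this subsystem (nvme-subsys0 in
    # this run); "numa" and "round-robin" are the values exercised here.
    echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy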
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:04.146 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:05.561 17:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:05.561 17:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:05.561 17:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:05.561 17:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:05.561 17:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:05.820 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:25:05.820 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:05.820 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:05.820 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:05.821 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:06.757 17:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:06.757 17:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:06.757 17:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:06.757 17:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 104627 00:25:09.294 00:25:09.294 job0: (groupid=0, jobs=1): err= 0: pid=104648: Tue Nov 19 17:24:01 2024 00:25:09.294 read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(295MiB/6003msec) 00:25:09.294 slat (usec): min=5, max=5209, avg=39.98, stdev=211.93 00:25:09.294 clat (usec): min=408, max=17775, avg=6901.68, stdev=1746.26 00:25:09.294 lat (usec): min=421, max=17784, avg=6941.66, stdev=1754.19 00:25:09.294 clat percentiles (usec): 00:25:09.294 | 1.00th=[ 2442], 5.00th=[ 3884], 10.00th=[ 4883], 20.00th=[ 5932], 00:25:09.294 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7111], 00:25:09.294 | 70.00th=[ 7373], 80.00th=[ 7898], 90.00th=[ 8979], 95.00th=[10028], 00:25:09.294 | 99.00th=[11994], 99.50th=[13042], 99.90th=[15139], 99.95th=[15664], 00:25:09.294 | 99.99th=[16712] 00:25:09.294 bw ( KiB/s): min= 7912, max=32528, per=52.76%, avg=26590.00, stdev=7811.50, samples=11 00:25:09.294 iops : min= 1978, max= 8132, avg=6647.45, stdev=1952.95, samples=11 00:25:09.294 write: IOPS=7411, BW=29.0MiB/s (30.4MB/s)(150MiB/5180msec); 0 zone resets 00:25:09.294 slat (usec): min=11, max=2801, avg=50.60, stdev=110.96 00:25:09.294 clat (usec): min=375, max=14755, avg=6243.98, stdev=1526.30 00:25:09.294 lat (usec): min=402, max=14781, avg=6294.58, stdev=1530.40 00:25:09.294 clat percentiles (usec): 00:25:09.294 | 1.00th=[ 2311], 5.00th=[ 3228], 10.00th=[ 3982], 20.00th=[ 5538], 00:25:09.294 | 30.00th=[ 5932], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6587], 00:25:09.294 | 70.00th=[ 6783], 80.00th=[ 7046], 90.00th=[ 7701], 95.00th=[ 8848], 00:25:09.294 | 99.00th=[10290], 99.50th=[11076], 99.90th=[13042], 99.95th=[13566], 00:25:09.294 | 99.99th=[14353] 00:25:09.294 bw ( KiB/s): min= 8264, 
max=33136, per=89.54%, avg=26545.55, stdev=7692.78, samples=11 00:25:09.294 iops : min= 2066, max= 8284, avg=6636.36, stdev=1923.23, samples=11 00:25:09.294 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:25:09.294 lat (msec) : 2=0.51%, 4=6.33%, 10=89.20%, 20=3.89% 00:25:09.294 cpu : usr=5.90%, sys=24.44%, ctx=9197, majf=0, minf=127 00:25:09.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:09.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:09.294 issued rwts: total=75641,38391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:09.294 00:25:09.294 Run status group 0 (all jobs): 00:25:09.294 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=295MiB (310MB), run=6003-6003msec 00:25:09.294 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=150MiB (157MB), run=5180-5180msec 00:25:09.294 00:25:09.294 Disk stats (read/write): 00:25:09.294 nvme0n1: ios=73843/38391, merge=0/0, ticks=479026/229063, in_queue=708089, util=98.61% 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:09.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:25:09.294 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:25:09.554 17:24:01 
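Teardown mirrors the setup: a single nvme disconnect by NQN drops both TCP controllers at once (hence "disconnected 2 controller(s)"), waitforserial_disconnect polls lsblk until the test serial disappears, and only then is the subsystem deleted. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # One disconnect by NQN tears down both paths.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # waitforserial_disconnect, approximately: wait until no block
    # device reports the test serial any more.
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1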
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.554 rmmod nvme_tcp 00:25:09.554 rmmod nvme_fabrics 00:25:09.554 rmmod nvme_keyring 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 104356 ']' 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 104356 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 104356 ']' 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 104356 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.554 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104356 00:25:09.554 killing process with pid 104356 00:25:09.554 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.554 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.554 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104356' 00:25:09.554 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 104356 00:25:09.554 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 104356 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:09.813 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:25:10.073 ************************************ 00:25:10.073 END TEST nvmf_target_multipath 00:25:10.073 ************************************ 00:25:10.073 00:25:10.073 real 0m20.517s 00:25:10.073 user 1m11.888s 00:25:10.073 sys 0m8.187s 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.073 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
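The iptr helper traced above is the payoff for tagging every iptables rule with an SPDK_NVMF comment at setup time: cleanup is just re-loading the ruleset with those lines filtered out, after which the veth topology is dismantled. In essence:

    # iptr: drop every rule the test added, selected by its comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # nvmf_veth_fini: delete the bridge and all four virtual interfaces.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # roughly what remove_spdk_ns does here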
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:10.332 ************************************ 00:25:10.332 START TEST nvmf_zcopy 00:25:10.332 ************************************ 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:10.332 * Looking for test storage... 00:25:10.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.332 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.333 --rc genhtml_branch_coverage=1 00:25:10.333 --rc genhtml_function_coverage=1 00:25:10.333 --rc genhtml_legend=1 00:25:10.333 --rc geninfo_all_blocks=1 00:25:10.333 --rc geninfo_unexecuted_blocks=1 00:25:10.333 00:25:10.333 ' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.333 --rc genhtml_branch_coverage=1 00:25:10.333 --rc genhtml_function_coverage=1 00:25:10.333 --rc genhtml_legend=1 00:25:10.333 --rc geninfo_all_blocks=1 00:25:10.333 --rc geninfo_unexecuted_blocks=1 00:25:10.333 00:25:10.333 ' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.333 --rc genhtml_branch_coverage=1 00:25:10.333 --rc genhtml_function_coverage=1 00:25:10.333 --rc genhtml_legend=1 00:25:10.333 --rc geninfo_all_blocks=1 00:25:10.333 --rc geninfo_unexecuted_blocks=1 00:25:10.333 00:25:10.333 ' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.333 --rc genhtml_branch_coverage=1 00:25:10.333 --rc genhtml_function_coverage=1 00:25:10.333 --rc genhtml_legend=1 00:25:10.333 --rc geninfo_all_blocks=1 00:25:10.333 --rc geninfo_unexecuted_blocks=1 00:25:10.333 00:25:10.333 ' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
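The burst of scripts/common.sh lines here is only a version comparison: the zcopy test checks the installed lcov with lt 1.15 2 to choose compatible coverage flags. A simplified reconstruction of the traced cmp_versions logic, covering just the strict less-than case used here and assuming purely numeric fields:

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS=.-:          # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields act as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo "lcov older than 2"   # true: 1 < 2 in the first field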
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:25:10.333 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.334 17:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.334 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:10.593 17:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:10.593 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:10.594 Cannot find device "nvmf_init_br" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:10.594 Cannot find device "nvmf_init_br2" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:10.594 Cannot find device "nvmf_tgt_br" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:10.594 Cannot find device "nvmf_tgt_br2" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:10.594 Cannot find device "nvmf_init_br" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:10.594 Cannot find device "nvmf_init_br2" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:10.594 Cannot find device "nvmf_tgt_br" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:10.594 Cannot find device "nvmf_tgt_br2" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:10.594 Cannot find device 
"nvmf_br" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:10.594 Cannot find device "nvmf_init_if" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:10.594 Cannot find device "nvmf_init_if2" 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:10.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:10.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:25:10.594 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:10.594 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:10.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:10.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:25:10.853 00:25:10.853 --- 10.0.0.3 ping statistics --- 00:25:10.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.853 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:10.853 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:10.853 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:25:10.853 00:25:10.853 --- 10.0.0.4 ping statistics --- 00:25:10.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.853 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:10.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:10.853 00:25:10.853 --- 10.0.0.1 ping statistics --- 00:25:10.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.853 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:10.853 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:10.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:10.853 00:25:10.854 --- 10.0.0.2 ping statistics --- 00:25:10.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.854 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104987 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104987 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104987 ']' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.854 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:10.854 [2024-11-19 17:24:03.322691] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:10.854 [2024-11-19 17:24:03.323737] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:25:10.854 [2024-11-19 17:24:03.323823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.113 [2024-11-19 17:24:03.464093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.113 [2024-11-19 17:24:03.504820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.113 [2024-11-19 17:24:03.504865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.113 [2024-11-19 17:24:03.504874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.113 [2024-11-19 17:24:03.504881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.113 [2024-11-19 17:24:03.504887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.113 [2024-11-19 17:24:03.505196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.113 [2024-11-19 17:24:03.590204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:11.113 [2024-11-19 17:24:03.590486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
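[editor's note] The traced startup above reduces to a small, reusable pattern: launch nvmf_tgt inside the nvmf_tgt_ns_spdk namespace in interrupt mode, then poll its JSON-RPC socket until the app answers. The sketch below is a hedged reconstruction from the log, not the harness's actual helper: wait_for_rpc_sock is a hypothetical stand-in for the waitforlisten function seen in the trace, and the binary/socket paths are the ones the log itself shows.

    # Sketch: start the NVMe-oF target in interrupt mode inside the test
    # namespace and block until the JSON-RPC socket accepts requests.
    # Assumes the namespace nvmf_tgt_ns_spdk already exists (set up earlier
    # by nvmf_veth_init) and the repo layout from the log.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # standard SPDK RPC client location
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    wait_for_rpc_sock() {  # hypothetical stand-in for the harness's waitforlisten
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$nvmfpid" 2>/dev/null || return 1   # bail out if the target died
            # rpc_get_methods is a cheap probe; it succeeds once the
            # UNIX-domain socket is up and the app is serving RPCs.
            "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock || exit 1

Once the socket answers, the script proceeds to the rpc_cmd calls that follow (transport, subsystem, listener, namespace), which is exactly the sequence traced below.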
00:25:11.113 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.113 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:25:11.113 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.113 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.113 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 [2024-11-19 17:24:03.673921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 [2024-11-19 17:24:03.690067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:25:11.373 17:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 malloc0 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:11.373 { 00:25:11.373 "params": { 00:25:11.373 "name": "Nvme$subsystem", 00:25:11.373 "trtype": "$TEST_TRANSPORT", 00:25:11.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.373 "adrfam": "ipv4", 00:25:11.373 "trsvcid": "$NVMF_PORT", 00:25:11.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.373 "hdgst": ${hdgst:-false}, 00:25:11.373 "ddgst": ${ddgst:-false} 00:25:11.373 }, 00:25:11.373 "method": "bdev_nvme_attach_controller" 00:25:11.373 } 00:25:11.373 EOF 00:25:11.373 )") 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:11.373 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:11.373 "params": { 00:25:11.373 "name": "Nvme1", 00:25:11.373 "trtype": "tcp", 00:25:11.373 "traddr": "10.0.0.3", 00:25:11.373 "adrfam": "ipv4", 00:25:11.373 "trsvcid": "4420", 00:25:11.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.373 "hdgst": false, 00:25:11.373 "ddgst": false 00:25:11.373 }, 00:25:11.373 "method": "bdev_nvme_attach_controller" 00:25:11.373 }' 00:25:11.373 [2024-11-19 17:24:03.788471] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:25:11.373 [2024-11-19 17:24:03.788580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105019 ] 00:25:11.632 [2024-11-19 17:24:03.943285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.632 [2024-11-19 17:24:04.002364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.891 Running I/O for 10 seconds... 00:25:13.765 7103.00 IOPS, 55.49 MiB/s [2024-11-19T17:24:07.234Z] 7140.50 IOPS, 55.79 MiB/s [2024-11-19T17:24:08.611Z] 7127.00 IOPS, 55.68 MiB/s [2024-11-19T17:24:09.545Z] 7137.75 IOPS, 55.76 MiB/s [2024-11-19T17:24:10.482Z] 7152.80 IOPS, 55.88 MiB/s [2024-11-19T17:24:11.418Z] 7148.33 IOPS, 55.85 MiB/s [2024-11-19T17:24:12.354Z] 7156.71 IOPS, 55.91 MiB/s [2024-11-19T17:24:13.290Z] 7156.88 IOPS, 55.91 MiB/s [2024-11-19T17:24:14.225Z] 7160.44 IOPS, 55.94 MiB/s [2024-11-19T17:24:14.484Z] 7165.40 IOPS, 55.98 MiB/s 00:25:21.951 Latency(us) 00:25:21.951 [2024-11-19T17:24:14.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:21.951 Verification LBA range: start 0x0 length 0x1000 00:25:21.951 Nvme1n1 : 10.02 7166.46 55.99 0.00 0.00 17808.25 2636.33 24427.05 00:25:21.951 [2024-11-19T17:24:14.484Z] =================================================================================================================== 00:25:21.951 [2024-11-19T17:24:14.484Z] Total : 7166.46 55.99 0.00 0.00 17808.25 2636.33 24427.05 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105137 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:21.951 { 00:25:21.951 "params": { 00:25:21.951 "name": "Nvme$subsystem", 00:25:21.951 "trtype": "$TEST_TRANSPORT", 00:25:21.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.951 "adrfam": "ipv4", 00:25:21.951 "trsvcid": "$NVMF_PORT", 00:25:21.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.951 "hdgst": ${hdgst:-false}, 00:25:21.951 "ddgst": ${ddgst:-false} 00:25:21.951 }, 00:25:21.951 "method": "bdev_nvme_attach_controller" 00:25:21.951 } 00:25:21.951 EOF 00:25:21.951 )") 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:21.951 [2024-11-19 
17:24:14.477630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:21.951 [2024-11-19 17:24:14.477671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:21.951 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:21.951 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:22.210 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:22.210 "params": { 00:25:22.210 "name": "Nvme1", 00:25:22.210 "trtype": "tcp", 00:25:22.210 "traddr": "10.0.0.3", 00:25:22.210 "adrfam": "ipv4", 00:25:22.210 "trsvcid": "4420", 00:25:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:22.210 "hdgst": false, 00:25:22.210 "ddgst": false 00:25:22.210 }, 00:25:22.210 "method": "bdev_nvme_attach_controller" 00:25:22.210 }' 00:25:22.210 [2024-11-19 17:24:14.489608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.489633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.501622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.501648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.513588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.513607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 [2024-11-19 17:24:14.515880] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:25:22.210 [2024-11-19 17:24:14.515964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105137 ] 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.525587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.525608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.537589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.537610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.549589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.549612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.561587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.561607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.573605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.573626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.585589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.585609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.597592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.597611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.609606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.609627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.621592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.621615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.633591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.633614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.645587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.645607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.656622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.210 [2024-11-19 17:24:14.657672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.657691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.669588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.669608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.681588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.681608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.693587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.693607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.704093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.210 [2024-11-19 17:24:14.705601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.705621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.210 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.210 [2024-11-19 17:24:14.717587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.210 [2024-11-19 17:24:14.717606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.211 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.211 [2024-11-19 17:24:14.729618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.211 [2024-11-19 17:24:14.729640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.211 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.741697] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.741720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.753587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.753607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.765586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.765606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.777590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.777611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.789587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.789607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.801604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.801625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.813587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.813607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.825587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.825606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.837606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.837631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.849599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.849620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.861598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.861621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.873600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.873623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.885595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.885621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.897605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.897632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 Running I/O for 5 seconds... 00:25:22.470 [2024-11-19 17:24:14.916318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.916364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.930069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.930114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.947886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.947915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.961308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.961338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.976158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.976187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.470 [2024-11-19 17:24:14.992316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.470 [2024-11-19 17:24:14.992362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.470 2024/11/19 17:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.730 [2024-11-19 17:24:15.007320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:22.730 [2024-11-19 17:24:15.007352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:22.730 2024/11/19 17:24:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:22.730
[log condensed: the three-record sequence above — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext rejecting "Requested NSID 1 already in use", nvmf_rpc.c:1517:nvmf_rpc_ns_paused reporting "Unable to add namespace", and the client logging the failed nvmf_subsystem_add_ns JSON-RPC call with Code=-32602 Msg=Invalid parameters — repeats continuously with only the timestamps advancing, from 17:24:15.007 through 17:24:17.008. Two throughput samples are interleaved with the errors: 13672.00 IOPS, 106.81 MiB/s [2024-11-19T17:24:16.043Z] and 13813.50 IOPS, 107.92 MiB/s [2024-11-19T17:24:17.082Z]. The tail of the run resumes after the sketch below.]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:16.932991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:16.933036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:16.947195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:16.947224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:16.964559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:16.964589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:16.976973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:16.977003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:16.992640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:16.992670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:17.008768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:17.008796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:17.022172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:17.022217] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:17.040160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:17.040205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:17.053785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.549 [2024-11-19 17:24:17.053815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.549 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.549 [2024-11-19 17:24:17.064015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.550 [2024-11-19 17:24:17.064045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.550 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.808 [2024-11-19 17:24:17.081550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.808 [2024-11-19 17:24:17.081585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.808 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.808 [2024-11-19 17:24:17.092686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.808 [2024-11-19 17:24:17.092715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.808 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.808 [2024-11-19 17:24:17.107886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.808 [2024-11-19 17:24:17.107926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.124242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.124288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.140363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.140408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.155926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.155956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.172400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.172445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.185824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.185853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.203904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.203933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.217134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.217164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.232605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.232634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.245608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.245637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.258560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.258589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.275424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.275454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.291003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.291049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:24.809 [2024-11-19 17:24:17.308290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.308320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:25:24.809 [2024-11-19 17:24:17.324160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:24.809 [2024-11-19 17:24:17.324205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:24.809 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.342806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.342882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.360637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.360668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.375168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.375198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.392668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.392697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.406129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.406159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.424312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.424358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.442264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.442295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.459992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.460022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.475422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.475452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.492255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.492299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.507655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.507684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.524879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.524908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.538499] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.538528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.555560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.555593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.571539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.571569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.068 [2024-11-19 17:24:17.588580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.068 [2024-11-19 17:24:17.588722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.068 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.604440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.604613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.620587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.620718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.636076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.636275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.652569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.652599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.667118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.667251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.327 [2024-11-19 17:24:17.685160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.327 [2024-11-19 17:24:17.685190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.327 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.698367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.698499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.715556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.715587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.732488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.732520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.747411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.747441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.764015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.764045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.778710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.778752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.796402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.796432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.809933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.809962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.827480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.827509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.328 [2024-11-19 17:24:17.843578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.328 [2024-11-19 17:24:17.843608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.328 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.862160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.862205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.880714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.880758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.899305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.899334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 13860.67 IOPS, 108.29 MiB/s [2024-11-19T17:24:18.120Z] [2024-11-19 17:24:17.915196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.915225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.932297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.932329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.946951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.946982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.964102] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.964154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.980307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.980338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:17.996513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:17.996544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.011322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:18.011352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.028553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:18.028582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.045295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:18.045447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.059331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:18.059360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.076158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.587 [2024-11-19 17:24:18.076306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.587 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.587 [2024-11-19 17:24:18.090016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.588 [2024-11-19 17:24:18.090045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.588 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.588 [2024-11-19 17:24:18.107927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.588 [2024-11-19 17:24:18.107957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.588 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.121526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.121578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.134318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.134348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.152594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.152626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.165083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.165113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.179878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.179907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.196866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.196895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.208704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.208734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.224079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.224110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.240982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.241010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.255113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.255248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.271845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.271874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.288550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.288580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.304306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.304336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.319163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.319192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.336213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.336242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.352234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:25.847 [2024-11-19 17:24:18.352259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.847 [2024-11-19 17:24:18.368557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
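Note on the block of failures above: every repetition is the same three-message pattern. subsystem.c:2123 (spdk_nvmf_subsystem_add_ns_ext) rejects the add because NSID 1 is still attached, nvmf_rpc.c:1517 (nvmf_rpc_ns_paused) then reports "Unable to add namespace", and the test driver logs the JSON-RPC failure; its Go-style map dump is also why the boolean renders as %!s(bool=false). Code=-32602 is the standard JSON-RPC 2.0 "Invalid params" error code. The following minimal Python sketch reconstructs the retried call from the logged parameters; /var/tmp/spdk.sock is SPDK's stock RPC socket path and is an assumption here, since the target may have been started with a different -r path:

    import json, socket

    # Rebuild the request the log shows failing: attach bdev "malloc0" as
    # NSID 1 to subsystem nqn.2016-06.io.spdk:cnode1.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {
                "bdev_name": "malloc0",
                "nsid": 1,                 # collides while NSID 1 is in use
                "no_auto_visible": False,
            },
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")   # assumed default RPC socket
        sock.sendall(json.dumps(request).encode())
        # While the NSID is taken, the reply carries the same error the
        # log records: Code=-32602 Msg=Invalid parameters.
        print(sock.recv(65536).decode())

SPDK's scripts/rpc.py exposes the same nvmf_subsystem_add_ns method, so this error is the expected response for as long as the namespace remains attached, which is what the add-while-busy loop above appears to be exercising.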
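The throughput samples interleaved above (13813.50 IOPS, 107.92 MiB/s and 13860.67 IOPS, 108.29 MiB/s) come from the I/O workload running in parallel with the RPC loop, so the failed namespace adds are being driven while the subsystem is under load. Both readings are consistent with an 8 KiB I/O size, which is inferred rather than stated in the log: 13813.50 IOPS * 8 KiB = 110508 KiB/s, or about 107.92 MiB/s, and likewise 13860.67 IOPS * 8 KiB comes to about 108.29 MiB/s.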
00:25:25.847 [2024-11-19 17:24:18.368582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:25.847 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:25:26.107 [2024-11-19 17:24:18.383798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:26.107 [2024-11-19 17:24:18.383822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:26.107 2024/11/19 17:24:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence ("Requested NSID 1 already in use" / "Unable to add namespace" / JSON-RPC Code=-32602 Msg=Invalid parameters, identical parameters) repeats for every retry from 17:24:18.400 through 17:24:19.916; only the interleaved progress and summary lines below differ ...]
00:25:26.627 13893.25 IOPS, 108.54 MiB/s [2024-11-19T17:24:19.160Z]
00:25:27.420 13920.40 IOPS, 108.75 MiB/s [2024-11-19T17:24:19.953Z]
00:25:27.420 Latency(us)
00:25:27.420 [2024-11-19T17:24:19.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.420 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:25:27.420 Nvme1n1 : 5.01 13924.21 108.78 0.00 0.00 9181.21 2293.76 15966.95
00:25:27.420 [2024-11-19T17:24:19.953Z] ===================================================================================================================
00:25:27.420 [2024-11-19T17:24:19.953Z] Total : 13924.21 108.78 0.00 0.00 9181.21 2293.76 15966.95
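The burst of Code=-32602 failures above is the negative path this test drives on purpose: the loop keeps re-adding NSID 1 while it is still attached. A minimal sketch of reproducing the same failure by hand with SPDK's rpc.py, assuming a running target on the default /var/tmp/spdk.sock with subsystem cnode1 and bdev malloc0 already set up (parameters mirror the ones logged above):

    # Sketch, not harness output: a second add with the same NSID fails.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # attaches NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # "Requested NSID 1 already in use" -> Code=-32602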
[... after the summary the same duplicate-NSID error sequence continues while the target quiesces, timestamps 17:24:19.925 through 17:24:20.165 ...]
00:25:27.681 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105137) - No such process
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105137
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:25:27.681 delay0
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.681 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:25:27.940 [2024-11-19 17:24:20.357930] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:36.088 Initializing NVMe Controllers 00:25:36.088 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.088 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.088 Initialization complete. Launching workers. 00:25:36.088 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 27329 00:25:36.088 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27448, failed to submit 123 00:25:36.088 success 27381, unsuccessful 67, failed 0 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.088 rmmod nvme_tcp 00:25:36.088 rmmod nvme_fabrics 00:25:36.088 rmmod nvme_keyring 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104987 ']' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104987 ']' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:36.088 17:24:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:36.088 killing process with pid 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104987' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104987 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:36.088 
17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:25:36.088
00:25:36.088 real 0m25.307s
00:25:36.088 user 0m37.275s
00:25:36.088 sys 0m9.803s
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:36.088 ************************************
00:25:36.088 END TEST nvmf_zcopy
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:25:36.088 ************************************
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:25:36.088 ************************************
00:25:36.088 START TEST nvmf_nmic
00:25:36.088 ************************************
00:25:36.088 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:25:36.088 * Looking for test storage...
00:25:36.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.088 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:36.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.089 --rc genhtml_branch_coverage=1 00:25:36.089 --rc genhtml_function_coverage=1 00:25:36.089 --rc genhtml_legend=1 00:25:36.089 --rc geninfo_all_blocks=1 00:25:36.089 --rc geninfo_unexecuted_blocks=1 00:25:36.089 00:25:36.089 ' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:36.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.089 --rc genhtml_branch_coverage=1 00:25:36.089 --rc genhtml_function_coverage=1 00:25:36.089 --rc genhtml_legend=1 00:25:36.089 --rc geninfo_all_blocks=1 00:25:36.089 --rc geninfo_unexecuted_blocks=1 00:25:36.089 00:25:36.089 ' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:36.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.089 --rc genhtml_branch_coverage=1 00:25:36.089 --rc genhtml_function_coverage=1 00:25:36.089 --rc genhtml_legend=1 00:25:36.089 --rc geninfo_all_blocks=1 00:25:36.089 --rc geninfo_unexecuted_blocks=1 00:25:36.089 00:25:36.089 ' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:36.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.089 --rc genhtml_branch_coverage=1 00:25:36.089 --rc genhtml_function_coverage=1 00:25:36.089 --rc genhtml_legend=1 00:25:36.089 --rc geninfo_all_blocks=1 00:25:36.089 --rc geninfo_unexecuted_blocks=1 00:25:36.089 00:25:36.089 ' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.089 17:24:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.089 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:36.090 Cannot find device "nvmf_init_br" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:36.090 Cannot find device "nvmf_init_br2" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:36.090 Cannot find device "nvmf_tgt_br" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.090 Cannot find device "nvmf_tgt_br2" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:36.090 Cannot find device "nvmf_init_br" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:36.090 Cannot find device "nvmf_init_br2" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:36.090 Cannot find device "nvmf_tgt_br" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:36.090 Cannot find device "nvmf_tgt_br2" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
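The "Cannot find device" messages above are expected: nvmf_veth_init first tears down any leftover interfaces from a previous run, and each cleanup command is allowed to fail on a clean host. A minimal shell sketch of that idempotent cleanup-then-create pattern, using the interface names from the trace (the cleanup_link helper is hypothetical, not a function in nvmf/common.sh):

# Detach and down any stale interfaces; errors just mean a clean host.
cleanup_link() {
  ip link set "$1" nomaster 2>/dev/null || true
  ip link set "$1" down 2>/dev/null || true
}
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  cleanup_link "$dev"
done
ip link delete nvmf_br type bridge 2>/dev/null || true
# Then build the topology fresh: a namespace, veth pairs, and a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk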
00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:36.090 Cannot find device "nvmf_br" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:36.090 Cannot find device "nvmf_init_if" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:36.090 Cannot find device "nvmf_init_if2" 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:36.090 17:24:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:36.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:36.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms
00:25:36.090
00:25:36.090 --- 10.0.0.3 ping statistics ---
00:25:36.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:36.090 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:25:36.090 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:36.090 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:36.090 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms
00:25:36.090
00:25:36.091 --- 10.0.0.4 ping statistics ---
00:25:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:36.091 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:36.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:36.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:25:36.091
00:25:36.091 --- 10.0.0.1 ping statistics ---
00:25:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:36.091 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:36.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:36.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
00:25:36.091
00:25:36.091 --- 10.0.0.2 ping statistics ---
00:25:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:36.091 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:36.091 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=105509
00:25:36.349 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 105509 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 105509 ']' 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.350 17:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.350 [2024-11-19 17:24:28.711672] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:36.350 [2024-11-19 17:24:28.713027] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:25:36.350 [2024-11-19 17:24:28.713095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.350 [2024-11-19 17:24:28.859600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.608 [2024-11-19 17:24:28.912599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.608 [2024-11-19 17:24:28.912669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.608 [2024-11-19 17:24:28.912680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.608 [2024-11-19 17:24:28.912687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.608 [2024-11-19 17:24:28.912694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.608 [2024-11-19 17:24:28.913996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.608 [2024-11-19 17:24:28.914117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.608 [2024-11-19 17:24:28.914274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.608 [2024-11-19 17:24:28.914277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.608 [2024-11-19 17:24:29.030893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:36.608 [2024-11-19 17:24:29.031320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:36.608 [2024-11-19 17:24:29.031551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
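The waitforlisten call traced above blocks until the freshly started target answers on its JSON-RPC socket. A hedged shell sketch of that launch-and-wait step, reusing the exact nvmf_tgt invocation from the log; the polling loop is an illustration only, the real helper in autotest_common.sh is more thorough:

# Start the target inside the test namespace, in interrupt mode.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Poll the RPC socket; bail out early if the target died during startup.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    sleep 0.5
done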
00:25:36.608 [2024-11-19 17:24:29.031881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:36.608 [2024-11-19 17:24:29.032173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.608 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 [2024-11-19 17:24:29.119062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 Malloc0 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:36.870 
17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:25:36.870 [2024-11-19 17:24:29.199254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.870 test case1: single bdev can't be used in multiple subsystems
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:25:36.870 [2024-11-19 17:24:29.223002] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:25:36.870 [2024-11-19 17:24:29.223067] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:25:36.870 [2024-11-19 17:24:29.223080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:36.870 2024/11/19 17:24:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:25:36.870 request:
00:25:36.870 {
00:25:36.870   "method": "nvmf_subsystem_add_ns",
00:25:36.870   "params": {
00:25:36.870     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:25:36.870     "namespace": {
00:25:36.870       "bdev_name": "Malloc0",
00:25:36.870       "no_auto_visible": false
00:25:36.870     }
00:25:36.870   }
00:25:36.870 }
00:25:36.870 Got JSON-RPC error response
00:25:36.870 GoRPCClient: error on JSON-RPC call
00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:25:36.870 Adding namespace failed - expected result. 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:25:36.870 test case2: host connect to nvmf target in multiple paths 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 [2024-11-19 17:24:29.235080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:36.870 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:25:39.407 17:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 
-- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:25:39.407 [global]
00:25:39.407 thread=1
00:25:39.407 invalidate=1
00:25:39.407 rw=write
00:25:39.407 time_based=1
00:25:39.407 runtime=1
00:25:39.407 ioengine=libaio
00:25:39.407 direct=1
00:25:39.407 bs=4096
00:25:39.407 iodepth=1
00:25:39.407 norandommap=0
00:25:39.407 numjobs=1
00:25:39.407
00:25:39.407 verify_dump=1
00:25:39.407 verify_backlog=512
00:25:39.407 verify_state_save=0
00:25:39.407 do_verify=1
00:25:39.407 verify=crc32c-intel
00:25:39.407 [job0]
00:25:39.407 filename=/dev/nvme0n1
00:25:39.407 Could not set queue depth (nvme0n1)
00:25:39.407 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:39.407 fio-3.35
00:25:39.407 Starting 1 thread
00:25:40.344
00:25:40.344 job0: (groupid=0, jobs=1): err= 0: pid=105604: Tue Nov 19 17:24:32 2024
00:25:40.344   read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:25:40.344     slat (nsec): min=13163, max=56645, avg=16666.91, stdev=5393.19
00:25:40.344     clat (usec): min=150, max=928, avg=188.99, stdev=28.44
00:25:40.344      lat (usec): min=165, max=942, avg=205.65, stdev=29.40
00:25:40.344     clat percentiles (usec):
00:25:40.344      | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169],
00:25:40.344      | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192],
00:25:40.344      | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229],
00:25:40.344      | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 553], 99.95th=[ 594],
00:25:40.344      | 99.99th=[ 930]
00:25:40.344   write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets
00:25:40.344     slat (usec): min=18, max=113, avg=23.98, stdev= 7.59
00:25:40.344     clat (usec): min=97, max=373, avg=128.37, stdev=17.82
00:25:40.344      lat (usec): min=119, max=486, avg=152.35, stdev=20.48
00:25:40.344     clat percentiles (usec):
00:25:40.344      | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 115],
00:25:40.344      | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 128],
00:25:40.344      | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 161],
00:25:40.344      | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 217], 99.95th=[ 269],
00:25:40.344      | 99.99th=[ 375]
00:25:40.344    bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:25:40.344    iops       : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:25:40.344   lat (usec)  : 100=0.04%, 250=99.32%, 500=0.59%, 750=0.04%, 1000=0.02%
00:25:40.344   cpu         : usr=2.20%, sys=8.00%, ctx=5611, majf=0, minf=5
00:25:40.344   IO depths   : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:40.344      submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:40.344      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:40.344      issued rwts: total=2560,3051,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:40.344      latency  : target=0, window=0, percentile=100.00%, depth=1
00:25:40.344
00:25:40.344 Run status group 0 (all jobs):
00:25:40.344    READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:25:40.344   WRITE: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec
00:25:40.344
00:25:40.344 Disk stats (read/write):
00:25:40.344   nvme0n1: ios=2478/2560, merge=0/0, ticks=484/355, in_queue=839, util=91.48%
00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n
nqn.2016-06.io.spdk:cnode1 00:25:40.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.344 17:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.603 rmmod nvme_tcp 00:25:40.603 rmmod nvme_fabrics 00:25:40.603 rmmod nvme_keyring 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 105509 ']' 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 105509 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 105509 ']' 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 105509 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105509 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105509' 00:25:40.603 killing process with pid 105509 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 105509 00:25:40.603 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 105509 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.171 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:25:41.431 00:25:41.431 real 0m5.724s 00:25:41.431 user 0m12.886s 00:25:41.431 sys 0m2.477s 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:41.431 ************************************ 00:25:41.431 END TEST nvmf_nmic 00:25:41.431 ************************************ 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:41.431 ************************************ 00:25:41.431 START TEST nvmf_fio_target 00:25:41.431 ************************************ 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:25:41.431 * Looking for test storage... 
00:25:41.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.431 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.432 --rc genhtml_branch_coverage=1 00:25:41.432 --rc genhtml_function_coverage=1 00:25:41.432 --rc genhtml_legend=1 00:25:41.432 --rc geninfo_all_blocks=1 00:25:41.432 --rc geninfo_unexecuted_blocks=1 00:25:41.432 00:25:41.432 ' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.432 --rc genhtml_branch_coverage=1 00:25:41.432 --rc genhtml_function_coverage=1 00:25:41.432 --rc genhtml_legend=1 00:25:41.432 --rc geninfo_all_blocks=1 00:25:41.432 --rc geninfo_unexecuted_blocks=1 00:25:41.432 00:25:41.432 ' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.432 --rc genhtml_branch_coverage=1 00:25:41.432 --rc genhtml_function_coverage=1 00:25:41.432 --rc genhtml_legend=1 00:25:41.432 --rc geninfo_all_blocks=1 00:25:41.432 --rc geninfo_unexecuted_blocks=1 00:25:41.432 00:25:41.432 ' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.432 --rc genhtml_branch_coverage=1 00:25:41.432 --rc genhtml_function_coverage=1 00:25:41.432 --rc genhtml_legend=1 00:25:41.432 --rc geninfo_all_blocks=1 00:25:41.432 --rc geninfo_unexecuted_blocks=1 00:25:41.432 
00:25:41.432 ' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.432 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:25:41.433 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:41.692 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.692 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:41.692 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:41.693 17:24:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:41.693 Cannot find device "nvmf_init_br" 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:41.693 Cannot find device "nvmf_init_br2" 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:25:41.693 17:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:41.693 Cannot find device "nvmf_tgt_br" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.693 Cannot find device "nvmf_tgt_br2" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:41.693 Cannot find device "nvmf_init_br" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:41.693 Cannot find device "nvmf_init_br2" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:41.693 Cannot find device "nvmf_tgt_br" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:41.693 Cannot find device "nvmf_tgt_br2" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:41.693 Cannot find device "nvmf_br" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:41.693 Cannot find device "nvmf_init_if" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:41.693 Cannot find device "nvmf_init_if2" 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:41.693 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:41.694 17:24:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:41.694 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:41.694 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:41.953 17:24:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:41.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:41.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:25:41.953 00:25:41.953 --- 10.0.0.3 ping statistics --- 00:25:41.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.953 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:41.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:41.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:25:41.953 00:25:41.953 --- 10.0.0.4 ping statistics --- 00:25:41.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.953 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:41.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:25:41.953 00:25:41.953 --- 10.0.0.1 ping statistics --- 00:25:41.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.953 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:41.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:41.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:25:41.953 00:25:41.953 --- 10.0.0.2 ping statistics --- 00:25:41.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.953 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:41.953 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105840 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105840 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105840 ']' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.954 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.954 [2024-11-19 17:24:34.474422] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
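The nvmf_veth_init trace above builds the virtual test topology: veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2) and the target side (nvmf_tgt_if/nvmf_tgt_if2, moved into the nvmf_tgt_ns_spdk namespace), their host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and ping checks across all four 10.0.0.0/24 addresses. A minimal standalone sketch of the same topology, showing only the first veth pair per side, with names and addresses taken from the trace (run as root; iproute2 and iptables assumed; the real nvmf_veth_init in nvmf/common.sh also tags its iptables rules with SPDK_NVMF comments for later cleanup):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                   # bridge the two host-side peers
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                        # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
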
00:25:41.954 [2024-11-19 17:24:34.475927] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:25:41.954 [2024-11-19 17:24:34.476117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.213 [2024-11-19 17:24:34.628961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.213 [2024-11-19 17:24:34.701542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.213 [2024-11-19 17:24:34.701604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.213 [2024-11-19 17:24:34.701619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.213 [2024-11-19 17:24:34.701630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.213 [2024-11-19 17:24:34.701639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.213 [2024-11-19 17:24:34.703142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.213 [2024-11-19 17:24:34.703340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.213 [2024-11-19 17:24:34.703356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.213 [2024-11-19 17:24:34.703214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.472 [2024-11-19 17:24:34.823306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:42.472 [2024-11-19 17:24:34.824218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:42.472 [2024-11-19 17:24:34.824690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:42.472 [2024-11-19 17:24:34.825067] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:42.472 [2024-11-19 17:24:34.826239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
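Once waitforlisten returns (below), the harness provisions the interrupt-mode target over RPC: it creates the TCP transport, stacks four namespaces on cnode1 (two plain malloc bdevs, a RAID-0 over two more, and a concat over three), adds a TCP listener on 10.0.0.3:4420, and connects the kernel initiator. Condensed into a plain script from the rpc.py and nvme calls traced below (flags copied from the trace; the actual run interleaves these with xtrace output and adds the listener before the raid0/concat0 namespaces; error handling omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do
    $rpc bdev_malloc_create 64 512                        # Malloc0..Malloc6, 64 MiB, 512 B blocks
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 \
    --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The four namespaces then surface on the initiator as /dev/nvme0n1..n4 with serial SPDKISFASTANDAWESOME, which is what the waitforserial loop below greps for in lsblk output.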
00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.472 17:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:42.731 [2024-11-19 17:24:35.196661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.731 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:43.299 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:43.299 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:43.558 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:43.558 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:43.817 17:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:25:43.817 17:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:44.076 17:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:44.076 17:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:44.336 17:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:44.595 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:44.595 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:44.855 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:44.855 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:45.114 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:25:45.114 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:45.373 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:45.632 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:45.632 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.891 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:45.891 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:46.150 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:46.409 [2024-11-19 17:24:38.860799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:46.409 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:46.668 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:25:46.927 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:25:49.460 17:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:25:49.460 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:49.460 [global] 00:25:49.460 thread=1 00:25:49.460 invalidate=1 00:25:49.460 rw=write 00:25:49.460 time_based=1 00:25:49.460 runtime=1 00:25:49.460 ioengine=libaio 00:25:49.460 direct=1 00:25:49.460 bs=4096 00:25:49.460 iodepth=1 00:25:49.460 norandommap=0 00:25:49.460 numjobs=1 00:25:49.460 00:25:49.460 verify_dump=1 00:25:49.460 verify_backlog=512 00:25:49.460 verify_state_save=0 00:25:49.460 do_verify=1 00:25:49.460 verify=crc32c-intel 00:25:49.460 [job0] 00:25:49.460 filename=/dev/nvme0n1 00:25:49.460 [job1] 00:25:49.460 filename=/dev/nvme0n2 00:25:49.460 [job2] 00:25:49.460 filename=/dev/nvme0n3 00:25:49.460 [job3] 00:25:49.460 filename=/dev/nvme0n4 00:25:49.460 Could not set queue depth (nvme0n1) 00:25:49.460 Could not set queue depth (nvme0n2) 00:25:49.460 Could not set queue depth (nvme0n3) 00:25:49.460 Could not set queue depth (nvme0n4) 00:25:49.460 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:49.460 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:49.460 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:49.460 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:49.460 fio-3.35 00:25:49.460 Starting 4 threads 00:25:50.397 00:25:50.397 job0: (groupid=0, jobs=1): err= 0: pid=106120: Tue Nov 19 17:24:42 2024 00:25:50.397 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:25:50.397 slat (nsec): min=10647, max=65565, avg=15911.00, stdev=3969.70 00:25:50.397 clat (usec): min=160, max=4206, avg=240.02, stdev=119.97 00:25:50.397 lat (usec): min=174, max=4221, avg=255.93, stdev=120.02 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:25:50.397 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:25:50.397 | 70.00th=[ 235], 80.00th=[ 249], 90.00th=[ 306], 95.00th=[ 445], 00:25:50.397 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 857], 99.95th=[ 898], 00:25:50.397 | 99.99th=[ 4228] 00:25:50.397 write: IOPS=2302, BW=9211KiB/s (9432kB/s)(9220KiB/1001msec); 0 zone resets 00:25:50.397 slat (nsec): min=16216, max=85366, avg=25098.89, stdev=6658.25 00:25:50.397 clat (usec): min=112, max=1686, avg=178.53, stdev=74.51 00:25:50.397 lat (usec): min=132, max=1706, avg=203.63, stdev=75.71 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 129], 20.00th=[ 135], 00:25:50.397 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 169], 00:25:50.397 | 70.00th=[ 180], 80.00th=[ 196], 90.00th=[ 245], 95.00th=[ 338], 00:25:50.397 | 99.00th=[ 437], 99.50th=[ 469], 99.90th=[ 725], 99.95th=[ 750], 00:25:50.397 | 99.99th=[ 1680] 00:25:50.397 bw ( KiB/s): min=11208, max=11208, per=39.81%, avg=11208.00, stdev= 0.00, samples=1 00:25:50.397 iops : min= 2802, max= 2802, avg=2802.00, stdev= 0.00, samples=1 00:25:50.397 lat (usec) : 250=85.57%, 500=13.19%, 750=1.10%, 1000=0.09% 00:25:50.397 
lat (msec) : 2=0.02%, 10=0.02% 00:25:50.397 cpu : usr=1.50%, sys=6.60%, ctx=4353, majf=0, minf=7 00:25:50.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 issued rwts: total=2048,2305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.397 job1: (groupid=0, jobs=1): err= 0: pid=106121: Tue Nov 19 17:24:42 2024 00:25:50.397 read: IOPS=1351, BW=5407KiB/s (5536kB/s)(5412KiB/1001msec) 00:25:50.397 slat (nsec): min=15862, max=79637, avg=23294.42, stdev=8608.81 00:25:50.397 clat (usec): min=171, max=2674, avg=375.39, stdev=136.27 00:25:50.397 lat (usec): min=190, max=2707, avg=398.68, stdev=138.93 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 190], 5.00th=[ 210], 10.00th=[ 245], 20.00th=[ 302], 00:25:50.397 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 379], 00:25:50.397 | 70.00th=[ 412], 80.00th=[ 445], 90.00th=[ 498], 95.00th=[ 562], 00:25:50.397 | 99.00th=[ 709], 99.50th=[ 865], 99.90th=[ 2507], 99.95th=[ 2671], 00:25:50.397 | 99.99th=[ 2671] 00:25:50.397 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:50.397 slat (nsec): min=22942, max=99457, avg=34455.17, stdev=7746.98 00:25:50.397 clat (usec): min=143, max=3171, avg=260.87, stdev=87.74 00:25:50.397 lat (usec): min=178, max=3221, avg=295.33, stdev=88.36 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 192], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:25:50.397 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:25:50.397 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 334], 00:25:50.397 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 750], 99.95th=[ 3163], 00:25:50.397 | 99.99th=[ 3163] 00:25:50.397 bw ( KiB/s): min= 8192, max= 8192, per=29.10%, avg=8192.00, stdev= 0.00, samples=1 00:25:50.397 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:50.397 lat (usec) : 250=30.18%, 500=65.11%, 750=4.29%, 1000=0.24% 00:25:50.397 lat (msec) : 2=0.07%, 4=0.10% 00:25:50.397 cpu : usr=1.70%, sys=6.40%, ctx=2892, majf=0, minf=17 00:25:50.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 issued rwts: total=1353,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.397 job2: (groupid=0, jobs=1): err= 0: pid=106122: Tue Nov 19 17:24:42 2024 00:25:50.397 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:50.397 slat (nsec): min=12385, max=78995, avg=20501.04, stdev=4716.94 00:25:50.397 clat (usec): min=244, max=1255, avg=317.52, stdev=43.21 00:25:50.397 lat (usec): min=258, max=1277, avg=338.02, stdev=44.22 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:25:50.397 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:25:50.397 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 367], 00:25:50.397 | 99.00th=[ 400], 99.50th=[ 453], 99.90th=[ 1020], 99.95th=[ 1254], 00:25:50.397 | 99.99th=[ 1254] 00:25:50.397 write: IOPS=1667, BW=6669KiB/s (6829kB/s)(6676KiB/1001msec); 0 zone resets 
00:25:50.397 slat (nsec): min=17531, max=77707, avg=32984.57, stdev=6610.07 00:25:50.397 clat (usec): min=165, max=2905, avg=250.54, stdev=78.51 00:25:50.397 lat (usec): min=196, max=2944, avg=283.53, stdev=79.45 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 196], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:25:50.397 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:25:50.397 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 289], 00:25:50.397 | 99.00th=[ 322], 99.50th=[ 375], 99.90th=[ 1729], 99.95th=[ 2900], 00:25:50.397 | 99.99th=[ 2900] 00:25:50.397 bw ( KiB/s): min= 8192, max= 8192, per=29.10%, avg=8192.00, stdev= 0.00, samples=1 00:25:50.397 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:50.397 lat (usec) : 250=30.39%, 500=69.39%, 750=0.06%, 1000=0.03% 00:25:50.397 lat (msec) : 2=0.09%, 4=0.03% 00:25:50.397 cpu : usr=1.30%, sys=6.90%, ctx=3205, majf=0, minf=10 00:25:50.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.397 issued rwts: total=1536,1669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.397 job3: (groupid=0, jobs=1): err= 0: pid=106123: Tue Nov 19 17:24:42 2024 00:25:50.397 read: IOPS=1356, BW=5427KiB/s (5557kB/s)(5432KiB/1001msec) 00:25:50.397 slat (nsec): min=15401, max=66906, avg=22558.03, stdev=5271.70 00:25:50.397 clat (usec): min=184, max=2527, avg=378.90, stdev=123.71 00:25:50.397 lat (usec): min=202, max=2548, avg=401.46, stdev=124.13 00:25:50.397 clat percentiles (usec): 00:25:50.397 | 1.00th=[ 208], 5.00th=[ 237], 10.00th=[ 281], 20.00th=[ 306], 00:25:50.397 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 383], 00:25:50.397 | 70.00th=[ 416], 80.00th=[ 445], 90.00th=[ 502], 95.00th=[ 553], 00:25:50.397 | 99.00th=[ 725], 99.50th=[ 832], 99.90th=[ 2147], 99.95th=[ 2540], 00:25:50.397 | 99.99th=[ 2540] 00:25:50.397 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:50.397 slat (nsec): min=18591, max=84817, avg=35129.56, stdev=6673.46 00:25:50.397 clat (usec): min=143, max=868, avg=256.55, stdev=44.73 00:25:50.397 lat (usec): min=178, max=911, avg=291.68, stdev=44.79 00:25:50.397 clat percentiles (usec): 00:25:50.398 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:25:50.398 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:25:50.398 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 322], 00:25:50.398 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 635], 99.95th=[ 865], 00:25:50.398 | 99.99th=[ 865] 00:25:50.398 bw ( KiB/s): min= 8192, max= 8192, per=29.10%, avg=8192.00, stdev= 0.00, samples=1 00:25:50.398 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:50.398 lat (usec) : 250=29.58%, 500=65.45%, 750=4.53%, 1000=0.38% 00:25:50.398 lat (msec) : 4=0.07% 00:25:50.398 cpu : usr=2.00%, sys=6.00%, ctx=2894, majf=0, minf=3 00:25:50.398 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.398 issued rwts: total=1358,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.398 latency : target=0, window=0, percentile=100.00%, depth=1 
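The four jobs above were generated by fio-wrapper from the parameters dumped at the start of the run ([global] through [job3]). Reconstructed as a standalone job file, the same workload would look roughly like this (a sketch: option values are copied from that dump, and the /dev/nvme0n1..n4 names assume the namespaces enumerate in the same order as on this host):

cat > /tmp/nvmf_write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf_write.fio

With verify=crc32c-intel and do_verify=1, each 4 KiB write is checksummed and read back, so the per-job read IOPS above reflect verification traffic rather than a separate read phase.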
00:25:50.398
00:25:50.398 Run status group 0 (all jobs):
00:25:50.398 READ: bw=24.6MiB/s (25.8MB/s), 5407KiB/s-8184KiB/s (5536kB/s-8380kB/s), io=24.6MiB (25.8MB), run=1001-1001msec
00:25:50.398 WRITE: bw=27.5MiB/s (28.8MB/s), 6138KiB/s-9211KiB/s (6285kB/s-9432kB/s), io=27.5MiB (28.9MB), run=1001-1001msec
00:25:50.398
00:25:50.398 Disk stats (read/write):
00:25:50.398 nvme0n1: ios=1981/2048, merge=0/0, ticks=484/347, in_queue=831, util=87.58%
00:25:50.398 nvme0n2: ios=1102/1536, merge=0/0, ticks=401/415, in_queue=816, util=87.63%
00:25:50.398 nvme0n3: ios=1182/1536, merge=0/0, ticks=390/407, in_queue=797, util=88.96%
00:25:50.398 nvme0n4: ios=1069/1536, merge=0/0, ticks=383/406, in_queue=789, util=89.62%
00:25:50.398 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:25:50.398 [global]
00:25:50.398 thread=1
00:25:50.398 invalidate=1
00:25:50.398 rw=randwrite
00:25:50.398 time_based=1
00:25:50.398 runtime=1
00:25:50.398 ioengine=libaio
00:25:50.398 direct=1
00:25:50.398 bs=4096
00:25:50.398 iodepth=1
00:25:50.398 norandommap=0
00:25:50.398 numjobs=1
00:25:50.398
00:25:50.398 verify_dump=1
00:25:50.398 verify_backlog=512
00:25:50.398 verify_state_save=0
00:25:50.398 do_verify=1
00:25:50.398 verify=crc32c-intel
00:25:50.398 [job0]
00:25:50.398 filename=/dev/nvme0n1
00:25:50.398 [job1]
00:25:50.398 filename=/dev/nvme0n2
00:25:50.398 [job2]
00:25:50.398 filename=/dev/nvme0n3
00:25:50.398 [job3]
00:25:50.398 filename=/dev/nvme0n4
00:25:50.398 Could not set queue depth (nvme0n1)
00:25:50.398 Could not set queue depth (nvme0n2)
00:25:50.398 Could not set queue depth (nvme0n3)
00:25:50.398 Could not set queue depth (nvme0n4)
00:25:50.656 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:50.656 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:50.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:50.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:50.657 fio-3.35
00:25:50.657 Starting 4 threads
00:25:52.032
00:25:52.032 job0: (groupid=0, jobs=1): err= 0: pid=106182: Tue Nov 19 17:24:44 2024
00:25:52.032 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:25:52.032 slat (nsec): min=6756, max=69194, avg=14446.10, stdev=7202.46
00:25:52.032 clat (usec): min=213, max=843, avg=483.78, stdev=73.51
00:25:52.032 lat (usec): min=223, max=859, avg=498.23, stdev=74.24
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 277], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 437],
00:25:52.032 | 30.00th=[ 453], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 498],
00:25:52.032 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 594],
00:25:52.032 | 99.00th=[ 660], 99.50th=[ 725], 99.90th=[ 816], 99.95th=[ 840],
00:25:52.032 | 99.99th=[ 840]
00:25:52.032 write: IOPS=1320, BW=5283KiB/s (5410kB/s)(5288KiB/1001msec); 0 zone resets
00:25:52.032 slat (nsec): min=8101, max=88210, avg=20050.30, stdev=12334.80
00:25:52.032 clat (usec): min=211, max=2426, avg=347.64, stdev=78.82
00:25:52.032 lat (usec): min=225, max=2440, avg=367.69, stdev=77.19
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 243], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302],
00:25:52.032 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 355],
00:25:52.032 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 441],
00:25:52.032 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 701], 99.95th=[ 2442],
00:25:52.032 | 99.99th=[ 2442]
00:25:52.032 bw ( KiB/s): min= 5400, max= 5400, per=22.09%, avg=5400.00, stdev= 0.00, samples=1
00:25:52.032 iops : min= 1350, max= 1350, avg=1350.00, stdev= 0.00, samples=1
00:25:52.032 lat (usec) : 250=0.72%, 500=81.88%, 750=17.22%, 1000=0.13%
00:25:52.032 lat (msec) : 4=0.04%
00:25:52.032 cpu : usr=0.90%, sys=3.30%, ctx=2489, majf=0, minf=9
00:25:52.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:52.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.032 issued rwts: total=1024,1322,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:52.032 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:52.032 job1: (groupid=0, jobs=1): err= 0: pid=106183: Tue Nov 19 17:24:44 2024
00:25:52.032 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:25:52.032 slat (nsec): min=6890, max=82881, avg=14959.31, stdev=7299.22
00:25:52.032 clat (usec): min=187, max=2811, avg=486.64, stdev=145.22
00:25:52.032 lat (usec): min=216, max=2825, avg=501.60, stdev=145.09
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 314], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 412],
00:25:52.032 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 498],
00:25:52.032 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 603],
00:25:52.032 | 99.00th=[ 766], 99.50th=[ 938], 99.90th=[ 2507], 99.95th=[ 2802],
00:25:52.032 | 99.99th=[ 2802]
00:25:52.032 write: IOPS=1421, BW=5686KiB/s (5823kB/s)(5692KiB/1001msec); 0 zone resets
00:25:52.032 slat (nsec): min=8316, max=83767, avg=22638.24, stdev=8803.30
00:25:52.032 clat (usec): min=152, max=734, avg=315.93, stdev=57.74
00:25:52.032 lat (usec): min=167, max=754, avg=338.56, stdev=57.00
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 204], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 269],
00:25:52.032 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326],
00:25:52.032 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[ 412],
00:25:52.032 | 99.00th=[ 465], 99.50th=[ 494], 99.90th=[ 644], 99.95th=[ 734],
00:25:52.032 | 99.99th=[ 734]
00:25:52.032 bw ( KiB/s): min= 6192, max= 6192, per=25.33%, avg=6192.00, stdev= 0.00, samples=1
00:25:52.032 iops : min= 1548, max= 1548, avg=1548.00, stdev= 0.00, samples=1
00:25:52.032 lat (usec) : 250=7.23%, 500=76.22%, 750=16.06%, 1000=0.29%
00:25:52.032 lat (msec) : 2=0.08%, 4=0.12%
00:25:52.032 cpu : usr=1.30%, sys=3.50%, ctx=2491, majf=0, minf=13
00:25:52.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:52.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.032 issued rwts: total=1024,1423,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:52.032 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:52.032 job2: (groupid=0, jobs=1): err= 0: pid=106184: Tue Nov 19 17:24:44 2024
00:25:52.032 read: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec)
00:25:52.032 slat (nsec): min=12278, max=90163, avg=17026.73, stdev=6001.61
00:25:52.032 clat (usec): min=165, max=3431, avg=267.05, stdev=112.14
00:25:52.032 lat (usec): min=179, max=3447, avg=284.08, stdev=112.30
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 219],
00:25:52.032 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 260],
00:25:52.032 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 351], 95.00th=[ 375],
00:25:52.032 | 99.00th=[ 424], 99.50th=[ 498], 99.90th=[ 2507], 99.95th=[ 3425],
00:25:52.032 | 99.99th=[ 3425]
00:25:52.032 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:25:52.032 slat (nsec): min=8954, max=71552, avg=23824.92, stdev=6753.26
00:25:52.032 clat (usec): min=117, max=1889, avg=212.12, stdev=71.88
00:25:52.032 lat (usec): min=138, max=1913, avg=235.95, stdev=72.91
00:25:52.032 clat percentiles (usec):
00:25:52.032 | 1.00th=[ 126], 5.00th=[ 137], 10.00th=[ 149], 20.00th=[ 163],
00:25:52.032 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 204],
00:25:52.032 | 70.00th=[ 235], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 326],
00:25:52.032 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 502], 99.95th=[ 840],
00:25:52.032 | 99.99th=[ 1893]
00:25:52.032 bw ( KiB/s): min= 8208, max= 8208, per=33.57%, avg=8208.00, stdev= 0.00, samples=1
00:25:52.032 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1
00:25:52.032 lat (usec) : 250=63.87%, 500=35.84%, 750=0.13%, 1000=0.05%
00:25:52.032 lat (msec) : 2=0.05%, 4=0.05%
00:25:52.032 cpu : usr=1.20%, sys=6.40%, ctx=3856, majf=0, minf=13
00:25:52.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:52.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.033 issued rwts: total=1794,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:52.033 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:52.033 job3: (groupid=0, jobs=1): err= 0: pid=106185: Tue Nov 19 17:24:44 2024
00:25:52.033 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:25:52.033 slat (nsec): min=7097, max=66340, avg=15026.52, stdev=7104.86
00:25:52.033 clat (usec): min=232, max=883, avg=482.78, stdev=72.02
00:25:52.033 lat (usec): min=244, max=898, avg=497.81, stdev=72.40
00:25:52.033 clat percentiles (usec):
00:25:52.033 | 1.00th=[ 289], 5.00th=[ 351], 10.00th=[ 404], 20.00th=[ 437],
00:25:52.033 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 498],
00:25:52.033 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586],
00:25:52.033 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 807], 99.95th=[ 881],
00:25:52.033 | 99.99th=[ 881]
00:25:52.033 write: IOPS=1323, BW=5295KiB/s (5422kB/s)(5300KiB/1001msec); 0 zone resets
00:25:52.033 slat (usec): min=8, max=100, avg=22.29, stdev=11.57
00:25:52.033 clat (usec): min=181, max=2479, avg=344.51, stdev=79.20
00:25:52.033 lat (usec): min=195, max=2498, avg=366.79, stdev=77.93
00:25:52.033 clat percentiles (usec):
00:25:52.033 | 1.00th=[ 237], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297],
00:25:52.033 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 351],
00:25:52.033 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 437],
00:25:52.033 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 2474],
00:25:52.033 | 99.99th=[ 2474]
00:25:52.033 bw ( KiB/s): min= 5392, max= 5392, per=22.06%, avg=5392.00, stdev= 0.00, samples=1
00:25:52.033 iops : min= 1348, max= 1348, avg=1348.00, stdev= 0.00, samples=1
00:25:52.033 lat (usec) : 250=0.89%, 500=82.33%, 750=16.56%, 1000=0.17%
00:25:52.033 lat (msec) : 4=0.04%
00:25:52.033 cpu : usr=0.50%, sys=4.10%, ctx=2503, majf=0, minf=11
00:25:52.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:52.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:52.033 issued rwts: total=1024,1325,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:52.033 latency : target=0, window=0, percentile=100.00%, depth=1
00:25:52.033
00:25:52.033 Run status group 0 (all jobs):
00:25:52.033 READ: bw=19.0MiB/s (19.9MB/s), 4092KiB/s-7169KiB/s (4190kB/s-7341kB/s), io=19.0MiB (19.9MB), run=1001-1001msec
00:25:52.033 WRITE: bw=23.9MiB/s (25.0MB/s), 5283KiB/s-8184KiB/s (5410kB/s-8380kB/s), io=23.9MiB (25.1MB), run=1001-1001msec
00:25:52.033
00:25:52.033 Disk stats (read/write):
00:25:52.033 nvme0n1: ios=987/1024, merge=0/0, ticks=453/351, in_queue=804, util=87.98%
00:25:52.033 nvme0n2: ios=1060/1024, merge=0/0, ticks=460/338, in_queue=798, util=88.46%
00:25:52.033 nvme0n3: ios=1536/1876, merge=0/0, ticks=401/408, in_queue=809, util=88.85%
00:25:52.033 nvme0n4: ios=939/1024, merge=0/0, ticks=425/358, in_queue=783, util=89.51%
00:25:52.033 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:25:52.033 [global]
00:25:52.033 thread=1
00:25:52.033 invalidate=1
00:25:52.033 rw=write
00:25:52.033 time_based=1
00:25:52.033 runtime=1
00:25:52.033 ioengine=libaio
00:25:52.033 direct=1
00:25:52.033 bs=4096
00:25:52.033 iodepth=128
00:25:52.033 norandommap=0
00:25:52.033 numjobs=1
00:25:52.033
00:25:52.033 verify_dump=1
00:25:52.033 verify_backlog=512
00:25:52.033 verify_state_save=0
00:25:52.033 do_verify=1
00:25:52.033 verify=crc32c-intel
00:25:52.033 [job0]
00:25:52.033 filename=/dev/nvme0n1
00:25:52.033 [job1]
00:25:52.033 filename=/dev/nvme0n2
00:25:52.033 [job2]
00:25:52.033 filename=/dev/nvme0n3
00:25:52.033 [job3]
00:25:52.033 filename=/dev/nvme0n4
00:25:52.033 Could not set queue depth (nvme0n1)
00:25:52.033 Could not set queue depth (nvme0n2)
00:25:52.033 Could not set queue depth (nvme0n3)
00:25:52.033 Could not set queue depth (nvme0n4)
00:25:52.033 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:52.033 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:52.033 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:52.033 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:52.033 fio-3.35
00:25:52.033 Starting 4 threads
00:25:53.408
00:25:53.408 job0: (groupid=0, jobs=1): err= 0: pid=106241: Tue Nov 19 17:24:45 2024
00:25:53.408 read: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1003msec)
00:25:53.408 slat (usec): min=3, max=9098, avg=170.92, stdev=804.22
00:25:53.408 clat (usec): min=1880, max=44178, avg=20701.96, stdev=4965.52
00:25:53.408 lat (usec): min=5059, max=44211, avg=20872.88, stdev=5020.90
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[11469], 5.00th=[14877], 10.00th=[15664], 20.00th=[16319],
00:25:53.408 | 30.00th=[16712], 40.00th=[17695], 50.00th=[20055], 60.00th=[21890],
00:25:53.408 | 70.00th=[23462], 80.00th=[25560], 90.00th=[27132], 95.00th=[29230],
00:25:53.408 | 99.00th=[32900], 99.50th=[37487], 99.90th=[44303], 99.95th=[44303],
00:25:53.408 | 99.99th=[44303]
00:25:53.408 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets
00:25:53.408 slat (usec): min=4, max=7451, avg=171.99, stdev=740.07
00:25:53.408 clat (usec): min=11700, max=71759, avg=23365.44, stdev=15246.31
00:25:53.408 lat (usec): min=11717, max=71798, avg=23537.42, stdev=15365.03
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[11994], 5.00th=[13435], 10.00th=[13698], 20.00th=[14353],
00:25:53.408 | 30.00th=[15401], 40.00th=[16581], 50.00th=[17695], 60.00th=[18744],
00:25:53.408 | 70.00th=[20841], 80.00th=[23987], 90.00th=[57410], 95.00th=[64750],
00:25:53.408 | 99.00th=[67634], 99.50th=[68682], 99.90th=[69731], 99.95th=[71828],
00:25:53.408 | 99.99th=[71828]
00:25:53.408 bw ( KiB/s): min=11968, max=12312, per=29.79%, avg=12140.00, stdev=243.24, samples=2
00:25:53.408 iops : min= 2992, max= 3078, avg=3035.00, stdev=60.81, samples=2
00:25:53.408 lat (msec) : 2=0.02%, 10=0.24%, 20=57.12%, 50=36.49%, 100=6.14%
00:25:53.408 cpu : usr=2.79%, sys=7.09%, ctx=523, majf=0, minf=11
00:25:53.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:53.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:53.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:53.408 issued rwts: total=2648,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:53.408 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:53.408 job1: (groupid=0, jobs=1): err= 0: pid=106242: Tue Nov 19 17:24:45 2024
00:25:53.408 read: IOPS=1983, BW=7932KiB/s (8123kB/s)(7972KiB/1005msec)
00:25:53.408 slat (usec): min=5, max=10729, avg=208.97, stdev=1020.55
00:25:53.408 clat (usec): min=4460, max=84023, avg=27136.84, stdev=12344.68
00:25:53.408 lat (usec): min=4472, max=87586, avg=27345.81, stdev=12374.04
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[ 7832], 5.00th=[15664], 10.00th=[16712], 20.00th=[17171],
00:25:53.408 | 30.00th=[17433], 40.00th=[17957], 50.00th=[25297], 60.00th=[29754],
00:25:53.408 | 70.00th=[34341], 80.00th=[35914], 90.00th=[42730], 95.00th=[45351],
00:25:53.408 | 99.00th=[74974], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411],
00:25:53.408 | 99.99th=[84411]
00:25:53.408 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets
00:25:53.408 slat (usec): min=11, max=19254, avg=277.24, stdev=1275.12
00:25:53.408 clat (msec): min=11, max=106, avg=35.30, stdev=25.10
00:25:53.408 lat (msec): min=14, max=106, avg=35.57, stdev=25.26
00:25:53.408 clat percentiles (msec):
00:25:53.408 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 16],
00:25:53.408 | 30.00th=[ 24], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 29],
00:25:53.408 | 70.00th=[ 31], 80.00th=[ 40], 90.00th=[ 85], 95.00th=[ 96],
00:25:53.408 | 99.00th=[ 105], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107],
00:25:53.408 | 99.99th=[ 107]
00:25:53.408 bw ( KiB/s): min= 8192, max= 8192, per=20.10%, avg=8192.00, stdev= 0.00, samples=2
00:25:53.408 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2
00:25:53.408 lat (msec) : 10=0.49%, 20=34.64%, 50=54.74%, 100=9.23%, 250=0.89%
00:25:53.408 cpu : usr=2.09%, sys=5.78%, ctx=404, majf=0, minf=5
00:25:53.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:53.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:53.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:53.408 issued rwts: total=1993,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:53.408 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:53.408 job2: (groupid=0, jobs=1): err= 0: pid=106243: Tue Nov 19 17:24:45 2024
00:25:53.408 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1005msec)
00:25:53.408 slat (usec): min=3, max=8865, avg=165.87, stdev=739.94
00:25:53.408 clat (usec): min=1731, max=31947, avg=20752.29, stdev=4632.68
00:25:53.408 lat (usec): min=7818, max=31980, avg=20918.16, stdev=4664.80
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[13304], 5.00th=[15533], 10.00th=[15926], 20.00th=[16319],
00:25:53.408 | 30.00th=[16909], 40.00th=[18220], 50.00th=[19792], 60.00th=[21890],
00:25:53.408 | 70.00th=[24511], 80.00th=[25560], 90.00th=[27132], 95.00th=[28705],
00:25:53.408 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31851], 99.95th=[31851],
00:25:53.408 | 99.99th=[31851]
00:25:53.408 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets
00:25:53.408 slat (usec): min=10, max=7305, avg=179.10, stdev=728.95
00:25:53.408 clat (usec): min=10997, max=69911, avg=23669.90, stdev=14989.57
00:25:53.408 lat (usec): min=11011, max=70454, avg=23849.00, stdev=15108.04
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[11994], 5.00th=[13304], 10.00th=[13829], 20.00th=[14353],
00:25:53.408 | 30.00th=[15664], 40.00th=[17171], 50.00th=[18220], 60.00th=[19792],
00:25:53.408 | 70.00th=[21627], 80.00th=[24511], 90.00th=[57934], 95.00th=[63701],
00:25:53.408 | 99.00th=[66847], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682],
00:25:53.408 | 99.99th=[69731]
00:25:53.408 bw ( KiB/s): min=11600, max=12312, per=29.34%, avg=11956.00, stdev=503.46, samples=2
00:25:53.408 iops : min= 2900, max= 3078, avg=2989.00, stdev=125.87, samples=2
00:25:53.408 lat (msec) : 2=0.02%, 10=0.14%, 20=56.27%, 50=37.49%, 100=6.08%
00:25:53.408 cpu : usr=3.19%, sys=7.07%, ctx=587, majf=0, minf=16
00:25:53.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:53.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:53.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:53.408 issued rwts: total=2602,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:53.408 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:53.408 job3: (groupid=0, jobs=1): err= 0: pid=106244: Tue Nov 19 17:24:45 2024
00:25:53.408 read: IOPS=1975, BW=7900KiB/s (8090kB/s)(7940KiB/1005msec)
00:25:53.408 slat (usec): min=2, max=10710, avg=206.56, stdev=1037.19
00:25:53.408 clat (usec): min=1699, max=86437, avg=27351.47, stdev=13069.44
00:25:53.408 lat (usec): min=5193, max=88480, avg=27558.02, stdev=13086.66
00:25:53.408 clat percentiles (usec):
00:25:53.408 | 1.00th=[ 8291], 5.00th=[15533], 10.00th=[16909], 20.00th=[17171],
00:25:53.408 | 30.00th=[17433], 40.00th=[18220], 50.00th=[23987], 60.00th=[29230],
00:25:53.408 | 70.00th=[33424], 80.00th=[35914], 90.00th=[42730], 95.00th=[47449],
00:25:53.408 | 99.00th=[84411], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508],
00:25:53.408 | 99.99th=[86508]
00:25:53.408 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets
00:25:53.408 slat (usec): min=11, max=18248, avg=281.42, stdev=1302.13
00:25:53.408 clat (msec): min=11, max=105, avg=34.86, stdev=25.17
00:25:53.408 lat (msec): min=13, max=105, avg=35.14, stdev=25.34
00:25:53.408 clat percentiles (msec):
00:25:53.408 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17],
00:25:53.408 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27],
00:25:53.408 | 70.00th=[ 30], 80.00th=[ 42], 90.00th=[ 89], 95.00th=[ 96],
00:25:53.409 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 106], 99.95th=[ 106],
00:25:53.409 | 99.99th=[ 106]
00:25:53.409 bw ( KiB/s): min= 8192, max= 8192, per=20.10%, avg=8192.00, stdev= 0.00, samples=2
00:25:53.409 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2
00:25:53.409 lat (msec) : 2=0.02%, 10=0.79%, 20=34.64%, 50=54.25%, 100=9.22%
00:25:53.409 lat (msec) : 250=1.07%
00:25:53.409 cpu : usr=1.99%, sys=5.98%, ctx=382, majf=0, minf=21
00:25:53.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:53.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:53.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:53.409 issued rwts: total=1985,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:53.409 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:53.409
00:25:53.409 Run status group 0 (all jobs):
00:25:53.409 READ: bw=35.9MiB/s (37.6MB/s), 7900KiB/s-10.3MiB/s (8090kB/s-10.8MB/s), io=36.0MiB (37.8MB), run=1003-1005msec
00:25:53.409 WRITE: bw=39.8MiB/s (41.7MB/s), 8151KiB/s-12.0MiB/s (8347kB/s-12.5MB/s), io=40.0MiB (41.9MB), run=1003-1005msec
00:25:53.409
00:25:53.409 Disk stats (read/write):
00:25:53.409 nvme0n1: ios=2610/2842, merge=0/0, ticks=16866/15591, in_queue=32457, util=88.68%
00:25:53.409 nvme0n2: ios=1521/1536, merge=0/0, ticks=10218/15243, in_queue=25461, util=89.00%
00:25:53.409 nvme0n3: ios=2581/2797, merge=0/0, ticks=16899/15991, in_queue=32890, util=93.31%
00:25:53.409 nvme0n4: ios=1472/1536, merge=0/0, ticks=10475/15689, in_queue=26164, util=89.46%
00:25:53.409 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:25:53.409 [global]
00:25:53.409 thread=1
00:25:53.409 invalidate=1
00:25:53.409 rw=randwrite
00:25:53.409 time_based=1
00:25:53.409 runtime=1
00:25:53.409 ioengine=libaio
00:25:53.409 direct=1
00:25:53.409 bs=4096
00:25:53.409 iodepth=128
00:25:53.409 norandommap=0
00:25:53.409 numjobs=1
00:25:53.409
00:25:53.409 verify_dump=1
00:25:53.409 verify_backlog=512
00:25:53.409 verify_state_save=0
00:25:53.409 do_verify=1
00:25:53.409 verify=crc32c-intel
00:25:53.409 [job0]
00:25:53.409 filename=/dev/nvme0n1
00:25:53.409 [job1]
00:25:53.409 filename=/dev/nvme0n2
00:25:53.409 [job2]
00:25:53.409 filename=/dev/nvme0n3
00:25:53.409 [job3]
00:25:53.409 filename=/dev/nvme0n4
00:25:53.409 Could not set queue depth (nvme0n1)
00:25:53.409 Could not set queue depth (nvme0n2)
00:25:53.409 Could not set queue depth (nvme0n3)
00:25:53.409 Could not set queue depth (nvme0n4)
00:25:53.409 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:53.409 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:53.409 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:53.409 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:25:53.409 fio-3.35
00:25:53.409 Starting 4 threads
00:25:54.789
00:25:54.789 job0: (groupid=0, jobs=1): err= 0: pid=106297: Tue Nov 19 17:24:46 2024
00:25:54.789 read: IOPS=3797, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1003msec)
00:25:54.789 slat (usec): min=8, max=4970, avg=125.85, stdev=618.94
00:25:54.789 clat (usec): min=467, max=23884, avg=16325.64, stdev=2859.81
00:25:54.789 lat (usec): min=3503, max=23992, avg=16451.48, stdev=2822.14
00:25:54.789 clat percentiles (usec):
00:25:54.789 | 1.00th=[ 7439], 5.00th=[12125], 10.00th=[14091], 20.00th=[14484],
00:25:54.789 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15533], 60.00th=[16319],
00:25:54.789 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317],
00:25:54.789 | 99.00th=[21103], 99.50th=[21365], 99.90th=[23200], 99.95th=[23987],
00:25:54.789 | 99.99th=[23987]
00:25:54.789 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets
00:25:54.789 slat (usec): min=10, max=4524, avg=120.00, stdev=563.02
00:25:54.789 clat (usec): min=10721, max=22625, avg=15719.71, stdev=2670.60
00:25:54.789 lat (usec): min=10743, max=22648, avg=15839.72, stdev=2662.95
00:25:54.789 clat percentiles (usec):
00:25:54.789 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12256], 20.00th=[13042],
00:25:54.789 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15270], 60.00th=[15795],
00:25:54.789 | 70.00th=[17957], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792],
00:25:54.789 | 99.00th=[20579], 99.50th=[20579], 99.90th=[22676], 99.95th=[22676],
00:25:54.789 | 99.99th=[22676]
00:25:54.789 bw ( KiB/s): min=14408, max=18360, per=31.35%, avg=16384.00, stdev=2794.49, samples=2
00:25:54.789 iops : min= 3602, max= 4590, avg=4096.00, stdev=698.62, samples=2
00:25:54.789 lat (usec) : 500=0.01%
00:25:54.789 lat (msec) : 4=0.38%, 10=0.43%, 20=93.75%, 50=5.43%
00:25:54.789 cpu : usr=3.19%, sys=11.18%, ctx=347, majf=0, minf=11
00:25:54.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:54.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:54.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:54.789 issued rwts: total=3809,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:54.789 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:54.789 job1: (groupid=0, jobs=1): err= 0: pid=106298: Tue Nov 19 17:24:46 2024
00:25:54.789 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec)
00:25:54.789 slat (usec): min=7, max=11169, avg=199.20, stdev=1038.30
00:25:54.789 clat (usec): min=11649, max=54685, avg=25261.85, stdev=10930.77
00:25:54.789 lat (usec): min=11671, max=56347, avg=25461.06, stdev=11023.26
00:25:54.789 clat percentiles (usec):
00:25:54.789 | 1.00th=[13435], 5.00th=[15664], 10.00th=[15795], 20.00th=[16450],
00:25:54.789 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[23462],
00:25:54.789 | 70.00th=[33162], 80.00th=[37487], 90.00th=[41681], 95.00th=[44827],
00:25:54.789 | 99.00th=[49546], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789],
00:25:54.789 | 99.99th=[54789]
00:25:54.789 write: IOPS=2877, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets
00:25:54.789 slat (usec): min=6, max=8363, avg=161.92, stdev=746.75
00:25:54.789 clat (usec): min=3817, max=40077, avg=21439.56, stdev=7110.81
00:25:54.789 lat (usec): min=3843, max=40091, avg=21601.48, stdev=7153.55
00:25:54.789 clat percentiles (usec):
00:25:54.789 | 1.00th=[10290], 5.00th=[14353], 10.00th=[16057], 20.00th=[16581],
00:25:54.789 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482],
00:25:54.789 | 70.00th=[22938], 80.00th=[29754], 90.00th=[32900], 95.00th=[35914],
00:25:54.789 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060],
00:25:54.789 | 99.99th=[40109]
00:25:54.789 bw ( KiB/s): min= 6208, max=15888, per=21.14%, avg=11048.00, stdev=6844.79, samples=2
00:25:54.789 iops : min= 1552, max= 3972, avg=2762.00, stdev=1711.20, samples=2
00:25:54.789 lat (msec) : 4=0.06%, 10=0.33%, 20=62.29%, 50=37.05%, 100=0.28%
00:25:54.789 cpu : usr=2.79%, sys=7.88%, ctx=397, majf=0, minf=9
00:25:54.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:54.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:54.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:54.789 issued rwts: total=2560,2889,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:54.789 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:54.789 job2: (groupid=0, jobs=1): err= 0: pid=106299: Tue Nov 19 17:24:46 2024
00:25:54.789 read: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.90MiB/1005msec)
00:25:54.789 slat (usec): min=4, max=10904, avg=204.73, stdev=1020.14
00:25:54.789 clat (usec): min=339, max=43371, avg=25633.32, stdev=8625.47
00:25:54.789 lat (usec): min=4238, max=43393, avg=25838.05, stdev=8638.87
00:25:54.789 clat percentiles (usec):
00:25:54.789 | 1.00th=[ 4752], 5.00th=[16581], 10.00th=[18482], 20.00th=[19268],
00:25:54.789 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[23200],
00:25:54.789 | 70.00th=[33817], 80.00th=[36963], 90.00th=[38011], 95.00th=[39060],
00:25:54.789 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:25:54.789 | 99.99th=[43254]
00:25:54.789 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets
00:25:54.789 slat (usec): min=12, max=8116, avg=180.84, stdev=768.68
00:25:54.790 clat (usec): min=15084, max=39200, avg=24067.55, stdev=6539.76
00:25:54.790 lat (usec): min=15108, max=40117, avg=24248.39, stdev=6577.94
00:25:54.790 clat percentiles (usec):
00:25:54.790 | 1.00th=[15270], 5.00th=[15926], 10.00th=[16909], 20.00th=[18482],
00:25:54.790 | 30.00th=[19530], 40.00th=[20055], 50.00th=[21627], 60.00th=[24249],
00:25:54.790 | 70.00th=[27919], 80.00th=[31589], 90.00th=[33817], 95.00th=[36439],
00:25:54.790 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060],
00:25:54.790 | 99.99th=[39060]
00:25:54.790 bw ( KiB/s): min= 8168, max=12312, per=19.60%, avg=10240.00, stdev=2930.25, samples=2
00:25:54.790 iops : min= 2042, max= 3078, avg=2560.00, stdev=732.56, samples=2
00:25:54.790 lat (usec) : 500=0.02%
00:25:54.790 lat (msec) : 10=0.63%, 20=34.61%, 50=64.74%
00:25:54.790 cpu : usr=2.09%, sys=8.67%, ctx=466, majf=0, minf=13
00:25:54.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:54.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:54.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:54.790 issued rwts: total=2534,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:54.790 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:54.790 job3: (groupid=0, jobs=1): err= 0: pid=106300: Tue Nov 19 17:24:46 2024
00:25:54.790 read: IOPS=3295, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1002msec)
00:25:54.790 slat (usec): min=7, max=6387, avg=145.11, stdev=713.82
00:25:54.790 clat (usec): min=623, max=24666, avg=18453.90, stdev=3366.90
00:25:54.790 lat (usec): min=3663, max=27820, avg=18599.01, stdev=3317.53
00:25:54.790 clat percentiles (usec):
00:25:54.790 | 1.00th=[ 7439], 5.00th=[13698], 10.00th=[15533], 20.00th=[16188],
00:25:54.790 | 30.00th=[16712], 40.00th=[17433], 50.00th=[17695], 60.00th=[19006],
00:25:54.790 | 70.00th=[21365], 80.00th=[21627], 90.00th=[22414], 95.00th=[22938],
00:25:54.790 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773],
00:25:54.790 | 99.99th=[24773]
00:25:54.790 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets
00:25:54.790 slat (usec): min=10, max=5262, avg=137.17, stdev=593.41
00:25:54.790 clat (usec): min=12184, max=24637, avg=18165.71, stdev=2721.24
00:25:54.790 lat (usec): min=12224, max=24670, avg=18302.88, stdev=2712.19
00:25:54.790 clat percentiles (usec):
00:25:54.790 | 1.00th=[12518], 5.00th=[14222], 10.00th=[15008], 20.00th=[15926],
00:25:54.790 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17695], 60.00th=[18220],
00:25:54.790 | 70.00th=[19268], 80.00th=[21103], 90.00th=[22414], 95.00th=[22938],
00:25:54.790 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511],
00:25:54.790 | 99.99th=[24511]
00:25:54.790 bw ( KiB/s): min=12312, max=16384, per=27.46%, avg=14348.00, stdev=2879.34, samples=2
00:25:54.790 iops : min= 3078, max= 4096, avg=3587.00, stdev=719.83, samples=2
00:25:54.790 lat (usec) : 750=0.01%
00:25:54.790 lat (msec) : 4=0.26%, 10=0.67%, 20=66.89%, 50=32.17%
00:25:54.790 cpu : usr=2.70%, sys=11.29%, ctx=368, majf=0, minf=19
00:25:54.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:54.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:54.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:54.790 issued rwts: total=3302,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:54.790 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:54.790
00:25:54.790 Run status group 0 (all jobs):
00:25:54.790 READ: bw=47.4MiB/s (49.7MB/s), 9.85MiB/s-14.8MiB/s (10.3MB/s-15.6MB/s), io=47.7MiB (50.0MB), run=1002-1005msec
00:25:54.790 WRITE: bw=51.0MiB/s (53.5MB/s), 9.95MiB/s-16.0MiB/s (10.4MB/s-16.7MB/s), io=51.3MiB (53.8MB), run=1002-1005msec
00:25:54.790
00:25:54.790 Disk stats (read/write):
00:25:54.790 nvme0n1: ios=3122/3562, merge=0/0, ticks=12089/12750, in_queue=24839, util=87.88%
00:25:54.790 nvme0n2: ios=2377/2560, merge=0/0, ticks=15459/14145, in_queue=29604, util=88.47%
00:25:54.790 nvme0n3: ios=2048/2507, merge=0/0, ticks=11532/13671, in_queue=25203, util=89.21%
00:25:54.790 nvme0n4: ios=2687/3072, merge=0/0, ticks=12215/12943, in_queue=25158, util=89.57%
00:25:54.790 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:25:54.790 17:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=106323
00:25:54.790 17:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:25:54.790 17:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:25:54.790 [global]
00:25:54.790 thread=1
00:25:54.790 invalidate=1
00:25:54.790 rw=read
00:25:54.790 time_based=1
00:25:54.790 runtime=10
00:25:54.790 ioengine=libaio
00:25:54.790 direct=1
00:25:54.790 bs=4096
00:25:54.790 iodepth=1
00:25:54.790 norandommap=1
00:25:54.790 numjobs=1
00:25:54.790
00:25:54.790 [job0]
00:25:54.790 filename=/dev/nvme0n1
00:25:54.790 [job1]
00:25:54.790 filename=/dev/nvme0n2
00:25:54.790 [job2]
00:25:54.790 filename=/dev/nvme0n3
00:25:54.790 [job3]
00:25:54.790 filename=/dev/nvme0n4
00:25:54.790 Could not set queue depth (nvme0n1)
00:25:54.790 Could not set queue depth (nvme0n2)
00:25:54.790 Could not set queue depth (nvme0n3)
00:25:54.790 Could not set queue depth (nvme0n4)
00:25:55.049 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:55.049 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:55.049 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:55.049 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:55.049 fio-3.35
00:25:55.049 Starting 4 threads
00:25:58.330 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:25:58.330 fio: pid=106366, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:25:58.330 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42270720, buflen=4096
00:25:58.330 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:25:58.330 fio: pid=106365, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:25:58.330 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35631104, buflen=4096
00:25:58.330 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:25:58.330 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:25:58.589 fio: pid=106363, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:25:58.589 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=48914432, buflen=4096
00:25:58.589 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:25:58.589 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:25:58.848 fio: pid=106364, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:25:58.848 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43548672, buflen=4096
00:25:58.848
00:25:58.848 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106363: Tue Nov 19 17:24:51 2024
00:25:58.848 read: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(46.6MiB/3445msec)
00:25:58.848 slat (usec): min=7, max=9899, avg=18.30, stdev=150.83
00:25:58.848 clat (usec): min=155, max=4218, avg=268.71, stdev=85.27
00:25:58.848 lat (usec): min=165, max=10168, avg=287.01, stdev=174.07
00:25:58.848 clat percentiles (usec):
00:25:58.848 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 233],
00:25:58.848 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273],
00:25:58.848 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330],
00:25:58.848 | 99.00th=[ 375], 99.50th=[ 469], 99.90th=[ 1254], 99.95th=[ 2343],
00:25:58.848 | 99.99th=[ 3523]
00:25:58.848 bw ( KiB/s): min=12704, max=14440, per=30.46%, avg=13650.67, stdev=649.82, samples=6
00:25:58.848 iops : min= 3176, max= 3610, avg=3412.67, stdev=162.45, samples=6
00:25:58.848 lat (usec) : 250=34.28%, 500=65.28%, 750=0.26%, 1000=0.05%
00:25:58.848 lat (msec) : 2=0.07%, 4=0.05%, 10=0.01%
00:25:58.848 cpu : usr=1.10%, sys=4.21%, ctx=11948, majf=0, minf=1
00:25:58.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.848 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.848 issued rwts: total=11943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:58.848 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106364: Tue Nov 19 17:24:51 2024 00:25:58.848 read: IOPS=2864, BW=11.2MiB/s (11.7MB/s)(41.5MiB/3712msec) 00:25:58.848 slat (usec): min=7, max=10358, avg=18.32, stdev=179.99 00:25:58.848 clat (usec): min=146, max=5393, avg=329.43, stdev=146.07 00:25:58.848 lat (usec): min=161, max=10618, avg=347.75, stdev=230.59 00:25:58.848 clat percentiles (usec): 00:25:58.848 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 194], 20.00th=[ 239], 00:25:58.848 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 347], 00:25:58.848 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 441], 95.00th=[ 486], 00:25:58.848 | 99.00th=[ 586], 99.50th=[ 685], 99.90th=[ 2540], 99.95th=[ 3523], 00:25:58.848 | 99.99th=[ 4228] 00:25:58.848 bw ( KiB/s): min=10168, max=14540, per=25.08%, avg=11240.57, stdev=1563.40, samples=7 00:25:58.848 iops : min= 2542, max= 3635, avg=2810.14, stdev=390.85, samples=7 00:25:58.848 lat (usec) : 250=22.47%, 500=73.69%, 750=3.48%, 1000=0.14% 00:25:58.848 lat (msec) : 2=0.11%, 4=0.08%, 10=0.03% 00:25:58.848 cpu : usr=0.89%, sys=3.48%, ctx=10646, majf=0, minf=2 00:25:58.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.848 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.848 issued rwts: total=10633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:58.848 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106365: Tue Nov 19 17:24:51 2024 00:25:58.848 read: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(34.0MiB/3216msec) 00:25:58.848 slat (usec): min=7, max=14659, avg=17.29, stdev=175.90 00:25:58.848 clat (usec): min=163, max=8101, avg=350.91, stdev=117.41 00:25:58.848 lat (usec): min=184, max=14989, avg=368.19, stdev=211.21 00:25:58.848 clat percentiles (usec): 00:25:58.848 | 1.00th=[ 202], 5.00th=[ 237], 10.00th=[ 269], 20.00th=[ 293], 00:25:58.848 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 359], 00:25:58.848 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 445], 95.00th=[ 482], 00:25:58.848 | 99.00th=[ 578], 99.50th=[ 660], 99.90th=[ 988], 99.95th=[ 1450], 00:25:58.848 | 99.99th=[ 8094] 00:25:58.848 bw ( KiB/s): min=10248, max=11624, per=24.08%, avg=10792.00, stdev=625.00, samples=6 00:25:58.848 iops : min= 2562, max= 2906, avg=2698.00, stdev=156.25, samples=6 00:25:58.848 lat (usec) : 250=6.72%, 500=89.87%, 750=3.16%, 1000=0.16% 00:25:58.848 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:25:58.849 cpu : usr=0.81%, sys=3.55%, ctx=8716, majf=0, minf=2 00:25:58.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.849 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.849 issued rwts: total=8700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.849 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:25:58.849 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106366: Tue Nov 19 17:24:51 2024 00:25:58.849 read: IOPS=3476, BW=13.6MiB/s (14.2MB/s)(40.3MiB/2969msec) 00:25:58.849 slat (nsec): min=11857, max=76814, avg=16855.32, stdev=4852.84 00:25:58.849 clat (usec): min=170, max=3823, avg=269.40, stdev=56.98 00:25:58.849 lat (usec): min=190, max=3843, avg=286.25, stdev=57.70 00:25:58.849 clat percentiles (usec): 00:25:58.849 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 225], 20.00th=[ 243], 00:25:58.849 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:25:58.849 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 326], 00:25:58.849 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 660], 99.95th=[ 766], 00:25:58.849 | 99.99th=[ 2212] 00:25:58.849 bw ( KiB/s): min=12896, max=15280, per=31.12%, avg=13947.20, stdev=969.13, samples=5 00:25:58.849 iops : min= 3224, max= 3820, avg=3486.80, stdev=242.28, samples=5 00:25:58.849 lat (usec) : 250=26.62%, 500=73.25%, 750=0.07%, 1000=0.02% 00:25:58.849 lat (msec) : 2=0.01%, 4=0.03% 00:25:58.849 cpu : usr=0.91%, sys=4.58%, ctx=10323, majf=0, minf=2 00:25:58.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.849 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.849 issued rwts: total=10321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:58.849 00:25:58.849 Run status group 0 (all jobs): 00:25:58.849 READ: bw=43.8MiB/s (45.9MB/s), 10.6MiB/s-13.6MiB/s (11.1MB/s-14.2MB/s), io=162MiB (170MB), run=2969-3712msec 00:25:58.849 00:25:58.849 Disk stats (read/write): 00:25:58.849 nvme0n1: ios=11648/0, merge=0/0, ticks=3166/0, in_queue=3166, util=95.31% 00:25:58.849 nvme0n2: ios=10250/0, merge=0/0, ticks=3360/0, in_queue=3360, util=95.64% 00:25:58.849 nvme0n3: ios=8387/0, merge=0/0, ticks=2895/0, in_queue=2895, util=95.96% 00:25:58.849 nvme0n4: ios=9970/0, merge=0/0, ticks=2737/0, in_queue=2737, util=96.69% 00:25:58.849 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:58.849 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:59.107 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:59.107 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:59.365 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:59.365 17:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:59.624 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:59.624 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:59.882 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:59.882 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 106323 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:00.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:26:00.140 nvmf hotplug test: fio failed as expected 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:26:00.140 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.399 17:24:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.399 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.658 rmmod nvme_tcp 00:26:00.658 rmmod nvme_fabrics 00:26:00.658 rmmod nvme_keyring 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105840 ']' 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105840 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105840 ']' 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105840 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.658 17:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105840 00:26:00.658 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.658 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.658 killing process with pid 105840 00:26:00.658 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105840' 00:26:00.658 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105840 00:26:00.658 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105840 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.918 17:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.918 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:26:01.178 00:26:01.178 real 0m19.721s 00:26:01.178 user 1m0.483s 00:26:01.178 sys 0m9.551s 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.178 ************************************ 00:26:01.178 END TEST nvmf_fio_target 00:26:01.178 ************************************ 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.178 
17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:01.178 ************************************ 00:26:01.178 START TEST nvmf_bdevio 00:26:01.178 ************************************ 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:01.178 * Looking for test storage... 00:26:01.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.178 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.438 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.439 --rc genhtml_branch_coverage=1 00:26:01.439 --rc genhtml_function_coverage=1 00:26:01.439 --rc genhtml_legend=1 00:26:01.439 --rc geninfo_all_blocks=1 00:26:01.439 --rc geninfo_unexecuted_blocks=1 00:26:01.439 00:26:01.439 ' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.439 --rc genhtml_branch_coverage=1 00:26:01.439 --rc genhtml_function_coverage=1 00:26:01.439 --rc genhtml_legend=1 00:26:01.439 --rc geninfo_all_blocks=1 00:26:01.439 --rc geninfo_unexecuted_blocks=1 00:26:01.439 00:26:01.439 ' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.439 --rc genhtml_branch_coverage=1 00:26:01.439 --rc genhtml_function_coverage=1 00:26:01.439 --rc genhtml_legend=1 00:26:01.439 --rc geninfo_all_blocks=1 00:26:01.439 --rc geninfo_unexecuted_blocks=1 00:26:01.439 00:26:01.439 ' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.439 --rc genhtml_branch_coverage=1 00:26:01.439 --rc genhtml_function_coverage=1 00:26:01.439 --rc genhtml_legend=1 00:26:01.439 --rc geninfo_all_blocks=1 00:26:01.439 --rc geninfo_unexecuted_blocks=1 00:26:01.439 00:26:01.439 ' 00:26:01.439 17:24:53 
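The scripts/common.sh trace above walks a component-wise version compare to decide whether the installed lcov predates version 2 before enabling the extra coverage flags: both version strings are split on any of ". - :" and compared element by element as integers. A simplified sketch of that logic (the real helper also validates each component through its decimal function, which this sketch omits):

lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: less-than
    done
    return 1   # equal: not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_*_coverage=1 options"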
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.439 17:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.439 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:01.440 Cannot find device "nvmf_init_br" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:01.440 Cannot find device "nvmf_init_br2" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:01.440 Cannot find device "nvmf_tgt_br" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:01.440 Cannot find device "nvmf_tgt_br2" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:01.440 Cannot find device "nvmf_init_br" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:01.440 Cannot find device "nvmf_init_br2" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:01.440 Cannot find device "nvmf_tgt_br" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:01.440 Cannot find device "nvmf_tgt_br2" 00:26:01.440 17:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:01.440 Cannot find device "nvmf_br" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:01.440 Cannot find device "nvmf_init_if" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:01.440 Cannot find device "nvmf_init_if2" 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:01.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:01.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:01.440 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:01.700 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:01.700 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:01.700 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:01.700 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:01.700 17:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:01.700 17:24:53 
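Condensed from the nvmf/common.sh@177-193 trace above, this is the veth topology nvmf_veth_init builds: two initiator-side pairs plus two target-side pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, with the *_br peers enslaved to the nvmf_br bridge in the steps that follow. A minimal sketch restating the traced commands:

ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint; the *_br peers join bridge nvmf_br later.
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4

# Target-side interfaces live in the namespace the nvmf_tgt app runs in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2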
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:01.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:01.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:26:01.700 00:26:01.700 --- 10.0.0.3 ping statistics --- 00:26:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.700 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:01.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:01.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:26:01.700 00:26:01.700 --- 10.0.0.4 ping statistics --- 00:26:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.700 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:01.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:01.700 00:26:01.700 --- 10.0.0.1 ping statistics --- 00:26:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.700 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:01.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:26:01.700 00:26:01.700 --- 10.0.0.2 ping statistics --- 00:26:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.700 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106734 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106734 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106734 ']' 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.700 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.701 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.701 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.701 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:01.701 [2024-11-19 17:24:54.195067] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:01.701 [2024-11-19 17:24:54.196062] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:26:01.701 [2024-11-19 17:24:54.196106] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.959 [2024-11-19 17:24:54.343639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:01.959 [2024-11-19 17:24:54.399094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.959 [2024-11-19 17:24:54.399164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.959 [2024-11-19 17:24:54.399179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.959 [2024-11-19 17:24:54.399189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.959 [2024-11-19 17:24:54.399198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.959 [2024-11-19 17:24:54.400491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:01.959 [2024-11-19 17:24:54.400584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:01.959 [2024-11-19 17:24:54.400864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:01.959 [2024-11-19 17:24:54.400869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.219 [2024-11-19 17:24:54.496050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:02.219 [2024-11-19 17:24:54.496195] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:02.219 [2024-11-19 17:24:54.496636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
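The nvmfappstart trace above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch sequence, assuming the same repo layout as the log; the polling loop is an approximation of waitforlisten, not its exact implementation:

NS=nvmf_tgt_ns_spdk
APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0x78: core mask for
# cores 3-6 (matching the four reactors reported in the trace);
# --interrupt-mode makes the reactors sleep on events instead of busy-polling.
ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!

# Poll until the app serves RPCs on the default UNIX domain socket.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target exited early
    sleep 0.1
done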
00:26:02.219 [2024-11-19 17:24:54.496947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:02.219 [2024-11-19 17:24:54.497268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 [2024-11-19 17:24:54.585885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 Malloc0 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:02.219 [2024-11-19 17:24:54.661952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:02.219 { 00:26:02.219 "params": { 00:26:02.219 "name": "Nvme$subsystem", 00:26:02.219 "trtype": "$TEST_TRANSPORT", 00:26:02.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.219 "adrfam": "ipv4", 00:26:02.219 "trsvcid": "$NVMF_PORT", 00:26:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.219 "hdgst": ${hdgst:-false}, 00:26:02.219 "ddgst": ${ddgst:-false} 00:26:02.219 }, 00:26:02.219 "method": "bdev_nvme_attach_controller" 00:26:02.219 } 00:26:02.219 EOF 00:26:02.219 )") 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:26:02.219 17:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:02.219 "params": { 00:26:02.219 "name": "Nvme1", 00:26:02.219 "trtype": "tcp", 00:26:02.219 "traddr": "10.0.0.3", 00:26:02.219 "adrfam": "ipv4", 00:26:02.219 "trsvcid": "4420", 00:26:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.219 "hdgst": false, 00:26:02.219 "ddgst": false 00:26:02.219 }, 00:26:02.219 "method": "bdev_nvme_attach_controller" 00:26:02.219 }' 00:26:02.219 [2024-11-19 17:24:54.734126] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
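Taken together, the target/bdevio.sh@18-22 rpc_cmd calls traced above provision the whole target: a TCP transport, a RAM-backed bdev, a subsystem exposing it, and a listener on the namespaced address. The same sequence as direct scripts/rpc.py calls, assuming the default /var/tmp/spdk.sock socket that rpc_cmd targets here:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192    # -u: in-capsule data size in bytes; -o as traced
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The generated JSON printed in the trace then points bdevio's bdev_nvme_attach_controller at that 10.0.0.3:4420 listener, passed in over the /dev/fd/62 process substitution.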
00:26:02.219 [2024-11-19 17:24:54.734208] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106781 ] 00:26:02.477 [2024-11-19 17:24:54.872920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:02.477 [2024-11-19 17:24:54.931117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.477 [2024-11-19 17:24:54.931243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.477 [2024-11-19 17:24:54.931251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.735 I/O targets: 00:26:02.735 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:02.735 00:26:02.735 00:26:02.735 CUnit - A unit testing framework for C - Version 2.1-3 00:26:02.735 http://cunit.sourceforge.net/ 00:26:02.735 00:26:02.735 00:26:02.735 Suite: bdevio tests on: Nvme1n1 00:26:02.735 Test: blockdev write read block ...passed 00:26:02.735 Test: blockdev write zeroes read block ...passed 00:26:02.735 Test: blockdev write zeroes read no split ...passed 00:26:02.735 Test: blockdev write zeroes read split ...passed 00:26:02.735 Test: blockdev write zeroes read split partial ...passed 00:26:02.735 Test: blockdev reset ...[2024-11-19 17:24:55.243512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:02.735 [2024-11-19 17:24:55.243625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8daf50 (9): Bad file descriptor 00:26:02.735 [2024-11-19 17:24:55.247552] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:26:02.735 passed 00:26:02.735 Test: blockdev write read 8 blocks ...passed 00:26:02.735 Test: blockdev write read size > 128k ...passed 00:26:02.735 Test: blockdev write read invalid size ...passed 00:26:02.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:02.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:02.993 Test: blockdev write read max offset ...passed 00:26:02.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:02.993 Test: blockdev writev readv 8 blocks ...passed 00:26:02.993 Test: blockdev writev readv 30 x 1block ...passed 00:26:02.993 Test: blockdev writev readv block ...passed 00:26:02.993 Test: blockdev writev readv size > 128k ...passed 00:26:02.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:02.993 Test: blockdev comparev and writev ...[2024-11-19 17:24:55.425428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.425483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.993 [2024-11-19 17:24:55.425502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.425513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.993 [2024-11-19 17:24:55.426191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.426245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.993 [2024-11-19 17:24:55.426263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.426273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.993 [2024-11-19 17:24:55.426901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.426957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.993 [2024-11-19 17:24:55.426974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.993 [2024-11-19 17:24:55.426984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.994 [2024-11-19 17:24:55.427518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.994 [2024-11-19 17:24:55.427547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.994 [2024-11-19 17:24:55.427564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:02.994 [2024-11-19 17:24:55.427573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.994 passed 00:26:02.994 Test: blockdev nvme passthru rw ...passed 00:26:02.994 Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:24:55.511113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:02.994 [2024-11-19 17:24:55.511141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.994 [2024-11-19 17:24:55.511500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:02.994 [2024-11-19 17:24:55.511530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.994 [2024-11-19 17:24:55.511781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:02.994 [2024-11-19 17:24:55.511877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.994 [2024-11-19 17:24:55.512211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:02.994 [2024-11-19 17:24:55.512242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.994 passed 00:26:02.994 Test: blockdev nvme admin passthru ...passed 00:26:03.252 Test: blockdev copy ...passed 00:26:03.252 00:26:03.252 Run Summary: Type Total Ran Passed Failed Inactive 00:26:03.252 suites 1 1 n/a 0 0 00:26:03.252 tests 23 23 23 0 0 00:26:03.252 asserts 152 152 152 0 n/a 00:26:03.252 00:26:03.252 Elapsed time = 0.864 seconds 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.522 rmmod nvme_tcp 00:26:03.522 rmmod nvme_fabrics 00:26:03.522 rmmod nvme_keyring 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
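The nvmftestfini teardown that follows undoes the firewall changes without tracking individual rules: every rule inserted earlier (nvmf/common.sh@790) carried an "SPDK_NVMF:" comment, so cleanup (nvmf/common.sh@791) simply round-trips the ruleset and filters those comments out. The wrapper bodies below are inferred from the expanded commands in the trace:

# Setup-side wrapper: tag each inserted rule with its own argument string.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Teardown-side wrapper: drop every tagged rule in one pass.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
iptr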
00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106734 ']' 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106734 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106734 ']' 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106734 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.522 17:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106734 00:26:03.522 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:26:03.522 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:26:03.522 killing process with pid 106734 00:26:03.522 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106734' 00:26:03.522 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106734 00:26:03.522 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106734 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:03.793 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.051 17:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.051 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:26:04.310 00:26:04.310 real 0m3.033s 00:26:04.310 user 0m7.769s 00:26:04.310 sys 0m1.261s 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:04.310 ************************************ 00:26:04.310 END TEST nvmf_bdevio 00:26:04.310 ************************************ 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:04.310 00:26:04.310 real 3m29.647s 00:26:04.310 user 9m29.225s 00:26:04.310 sys 1m15.817s 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.310 17:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:04.310 ************************************ 00:26:04.310 END TEST nvmf_target_core_interrupt_mode 00:26:04.310 ************************************ 00:26:04.310 17:24:56 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:04.310 17:24:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:04.310 17:24:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.310 17:24:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.310 ************************************ 00:26:04.310 START TEST nvmf_interrupt 00:26:04.310 ************************************ 00:26:04.310 17:24:56 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:04.310 * Looking for test storage... 00:26:04.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:04.310 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.310 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.310 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.570 --rc genhtml_branch_coverage=1 00:26:04.570 --rc genhtml_function_coverage=1 00:26:04.570 --rc genhtml_legend=1 00:26:04.570 --rc geninfo_all_blocks=1 00:26:04.570 --rc geninfo_unexecuted_blocks=1 00:26:04.570 00:26:04.570 ' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.570 --rc genhtml_branch_coverage=1 00:26:04.570 --rc genhtml_function_coverage=1 00:26:04.570 --rc genhtml_legend=1 00:26:04.570 --rc geninfo_all_blocks=1 00:26:04.570 --rc geninfo_unexecuted_blocks=1 00:26:04.570 00:26:04.570 ' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.570 --rc genhtml_branch_coverage=1 00:26:04.570 --rc genhtml_function_coverage=1 00:26:04.570 --rc genhtml_legend=1 00:26:04.570 --rc geninfo_all_blocks=1 00:26:04.570 --rc geninfo_unexecuted_blocks=1 00:26:04.570 00:26:04.570 ' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.570 --rc genhtml_branch_coverage=1 00:26:04.570 --rc genhtml_function_coverage=1 00:26:04.570 --rc genhtml_legend=1 00:26:04.570 --rc geninfo_all_blocks=1 00:26:04.570 --rc geninfo_unexecuted_blocks=1 00:26:04.570 00:26:04.570 ' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.570 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:26:04.571 17:24:56 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
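The NVMF_APP assembly traced here is the standard bash-array idiom: nvmf/common.sh grows the command line one conditional append at a time, then launches it by expanding the array so no quoting is lost. A hedged sketch of the sequence this trace evaluates (the interrupt_mode flag name is illustrative, since the trace only shows the already-expanded '[' 1 -eq 1 ']' test; the binary path and -m 0x3 mask are the ones this run launches later):

    # Build the target command line incrementally, as nvmf/common.sh does.
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # shared-memory id + tracepoint mask
    interrupt_mode=1                                      # illustrative name for the tested flag
    (( interrupt_mode == 1 )) && NVMF_APP+=(--interrupt-mode)
    # Once the veth/netns topology exists, the namespace wrapper is prepended:
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x3 &                             # reactors on cores 0-1, as in this run

Prepending the 'ip netns exec' wrapper as a final step, which the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line below performs, lets the same array serve both namespaced and bare-host runs.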
00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:04.571 Cannot find device "nvmf_init_br" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:04.571 Cannot find device "nvmf_init_br2" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:04.571 Cannot find device "nvmf_tgt_br" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.571 Cannot find device "nvmf_tgt_br2" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:04.571 Cannot find device "nvmf_init_br" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:04.571 Cannot find device "nvmf_init_br2" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:04.571 Cannot find device "nvmf_tgt_br" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:04.571 Cannot find device "nvmf_tgt_br2" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:04.571 Cannot find device "nvmf_br" 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:26:04.571 17:24:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:26:04.571 Cannot find device "nvmf_init_if" 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:04.571 Cannot find device "nvmf_init_if2" 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:04.571 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
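Everything nvmf_veth_init does above follows one pattern: the target gets a private network namespace, each endpoint is one end of a veth pair, and the peer ends are enslaved to a bridge so initiator and target can reach each other without touching the host network. A condensed single-pair replay of the commands in this trace (the full script creates two initiator and two target pairs and repeats the address/link-up steps for each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    # Firewall rules are tagged so teardown can sweep them in one pass:
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                           # initiator -> target sanity check

The "Cannot find device" and "Cannot open network namespace" messages just above are expected: the script first tears down any leftovers from a previous run, and each failing cleanup command is acknowledged with true.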
00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:04.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:04.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:26:04.831 00:26:04.831 --- 10.0.0.3 ping statistics --- 00:26:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.831 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:04.831 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:04.831 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:26:04.831 00:26:04.831 --- 10.0.0.4 ping statistics --- 00:26:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.831 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:04.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:26:04.831 00:26:04.831 --- 10.0.0.1 ping statistics --- 00:26:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.831 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:04.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:04.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:26:04.831 00:26:04.831 --- 10.0.0.2 ping statistics --- 00:26:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.831 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=107026 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 107026 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 107026 ']' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:04.831 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:04.831 [2024-11-19 17:24:57.344365] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:04.831 [2024-11-19 17:24:57.345357] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:26:04.831 [2024-11-19 17:24:57.345404] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.090 [2024-11-19 17:24:57.485761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:05.090 [2024-11-19 17:24:57.544205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:05.090 [2024-11-19 17:24:57.544265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.090 [2024-11-19 17:24:57.544274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.090 [2024-11-19 17:24:57.544282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.090 [2024-11-19 17:24:57.544288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.090 [2024-11-19 17:24:57.545673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.090 [2024-11-19 17:24:57.545680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.350 [2024-11-19 17:24:57.663040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:05.350 [2024-11-19 17:24:57.663661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:05.350 [2024-11-19 17:24:57.663820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:26:05.350 5000+0 records in 00:26:05.350 5000+0 records out 00:26:05.350 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0353083 s, 290 MB/s 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 AIO0 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 [2024-11-19 17:24:57.838525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 [2024-11-19 17:24:57.866994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107026 0 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 0 idle 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:05.350 17:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107026 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.28 reactor_0' 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107026 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.28 reactor_0 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107026 1 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 1 idle 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:05.609 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107037 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.00 reactor_1' 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107037 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.00 reactor_1 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:05.868 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=107083 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:05.869 
17:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107026 0 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107026 0 busy 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:05.869 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:06.127 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107026 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.28 reactor_0' 00:26:06.127 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107026 root 20 0 64.2g 45184 32640 S 0.0 0.4 0:00.28 reactor_0 00:26:06.127 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:06.128 17:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107026 root 20 0 64.2g 46464 33024 R 99.9 0.4 0:01.59 reactor_0' 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107026 root 20 0 64.2g 46464 33024 R 99.9 0.4 0:01.59 reactor_0 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:07.064 17:24:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107026 1 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107026 1 busy 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107037 root 20 0 64.2g 46464 33024 R 66.7 0.4 0:00.77 reactor_1' 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107037 root 20 0 64.2g 46464 33024 R 66.7 0.4 0:00.77 reactor_1 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:07.323 17:24:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107083 00:26:17.298 Initializing NVMe Controllers 00:26:17.298 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.298 Controller IO queue size 256, less than required. 00:26:17.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.298 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:17.298 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:17.298 Initialization complete. Launching workers. 
00:26:17.298 ======================================================== 00:26:17.298 Latency(us) 00:26:17.298 Device Information : IOPS MiB/s Average min max 00:26:17.298 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6678.20 26.09 38390.53 6558.86 224122.80 00:26:17.298 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7694.40 30.06 33317.89 7372.42 139196.32 00:26:17.298 ======================================================== 00:26:17.298 Total : 14372.59 56.14 35674.88 6558.86 224122.80 00:26:17.298 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107026 0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 0 idle 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107026 root 20 0 64.2g 46464 33024 S 0.0 0.4 0:13.59 reactor_0' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107026 root 20 0 64.2g 46464 33024 S 0.0 0.4 0:13.59 reactor_0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107026 1 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 1 idle 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107037 root 20 0 64.2g 46464 33024 S 0.0 0.4 0:06.53 reactor_1' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107037 root 20 0 64.2g 46464 33024 S 0.0 0.4 0:06.53 reactor_1 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:17.298 17:25:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.674 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.674 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.674 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:18.674 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107026 0 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 0 idle 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:18.675 17:25:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107026 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:13.65 reactor_0' 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107026 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:13.65 reactor_0 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107026 1 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107026 1 idle 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107026 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107026 -w 256 00:26:18.675 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107037 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:06.55 reactor_1' 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107037 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:06.55 reactor_1 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:18.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.942 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.202 rmmod nvme_tcp 00:26:19.202 rmmod nvme_fabrics 00:26:19.202 rmmod nvme_keyring 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 107026 ']' 00:26:19.202 17:25:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 107026 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 107026 ']' 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 107026 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107026 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.202 killing process with pid 107026 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107026' 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 107026 00:26:19.202 17:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 107026 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.461 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.720 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.720 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:19.720 17:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:26:19.720 00:26:19.720 real 0m15.563s 00:26:19.720 user 0m24.878s 00:26:19.720 sys 0m7.526s 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.720 17:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:19.720 ************************************ 00:26:19.720 END TEST nvmf_interrupt 00:26:19.720 ************************************ 00:26:19.979 00:26:19.979 real 20m30.124s 00:26:19.979 user 53m49.251s 00:26:19.979 sys 5m5.383s 00:26:19.979 17:25:12 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.979 ************************************ 00:26:19.979 END TEST nvmf_tcp 00:26:19.979 ************************************ 00:26:19.979 17:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:19.979 17:25:12 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:26:19.979 17:25:12 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:19.979 17:25:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.979 17:25:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.979 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:26:19.979 ************************************ 00:26:19.979 START TEST spdkcli_nvmf_tcp 00:26:19.979 ************************************ 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:19.979 * Looking for test storage... 
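The repeated idle/busy assertions in the nvmf_interrupt run above all reduce to one probe: sample top once in batch mode for the target pid, select the reactor's thread row, and compare its %CPU (field 9) against a threshold, with the busy case retried for up to ten samples. A simplified one-shot sketch of that check (the retry loop and the separate 65%/30% busy/idle thresholds from interrupt/common.sh are collapsed here, and the pid is the one from this run):

    # %CPU of thread reactor_<idx> inside process <pid>, from a single top sample.
    reactor_cpu() {   # usage: reactor_cpu <pid> <reactor-index>
        top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | awk '{print $9}'
    }
    rate=$(reactor_cpu 107026 1)
    rate=${rate:-0.0}                 # empty if the pid is already gone
    if (( ${rate%.*} > 30 )); then
        echo "reactor_1 busy at ${rate}% CPU"   # expected while spdk_nvme_perf runs
    else
        echo "reactor_1 idle at ${rate}% CPU"   # expected before connect and after disconnect
    fi

That is the whole point of the test: with --interrupt-mode the reactors show ~0% CPU whenever no I/O is in flight (the 0.0 samples above), instead of the 100% a polling SPDK target would burn, and climb back to 99.9%/66.7% only while the perf workload is active.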
00:26:19.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.979 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.980 --rc genhtml_branch_coverage=1 00:26:19.980 --rc genhtml_function_coverage=1 00:26:19.980 --rc genhtml_legend=1 00:26:19.980 --rc geninfo_all_blocks=1 00:26:19.980 --rc geninfo_unexecuted_blocks=1 00:26:19.980 00:26:19.980 ' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.980 --rc genhtml_branch_coverage=1 
00:26:19.980 --rc genhtml_function_coverage=1 00:26:19.980 --rc genhtml_legend=1 00:26:19.980 --rc geninfo_all_blocks=1 00:26:19.980 --rc geninfo_unexecuted_blocks=1 00:26:19.980 00:26:19.980 ' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.980 --rc genhtml_branch_coverage=1 00:26:19.980 --rc genhtml_function_coverage=1 00:26:19.980 --rc genhtml_legend=1 00:26:19.980 --rc geninfo_all_blocks=1 00:26:19.980 --rc geninfo_unexecuted_blocks=1 00:26:19.980 00:26:19.980 ' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.980 --rc genhtml_branch_coverage=1 00:26:19.980 --rc genhtml_function_coverage=1 00:26:19.980 --rc genhtml_legend=1 00:26:19.980 --rc geninfo_all_blocks=1 00:26:19.980 --rc geninfo_unexecuted_blocks=1 00:26:19.980 00:26:19.980 ' 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.980 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=107438 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 107438 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 107438 ']' 00:26:20.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.239 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.239 [2024-11-19 17:25:12.597001] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:26:20.239 [2024-11-19 17:25:12.597270] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107438 ] 00:26:20.239 [2024-11-19 17:25:12.750831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:20.499 [2024-11-19 17:25:12.819920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.499 [2024-11-19 17:25:12.819935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.499 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.499 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:20.499 17:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:20.499 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.499 17:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.499 17:25:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:20.499 17:25:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:20.499 17:25:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:20.499 17:25:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.499 17:25:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.759 17:25:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:20.759 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:20.759 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:20.759 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:20.759 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:20.759 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:20.759 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:20.759 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:20.759 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:20.759 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:20.759 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:20.759 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:20.759 ' 00:26:24.047 [2024-11-19 17:25:15.848328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.984 [2024-11-19 17:25:17.169183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:27.516 [2024-11-19 17:25:19.622527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:29.420 [2024-11-19 17:25:21.735720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:31.325 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:31.325 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:31.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:31.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:31.325 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:31.325 17:25:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.584 17:25:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:31.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:31.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:31.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:31.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:31.584 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:31.584 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:31.584 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:31.584 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:31.584 ' 00:26:38.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:38.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:38.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:38.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:38.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:38.152 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:38.152 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:38.152 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:38.152 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 107438 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107438 ']' 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107438 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107438 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107438' 00:26:38.152 killing process with pid 107438 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 107438 00:26:38.152 17:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 107438 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 107438 ']' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 107438 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107438 ']' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107438 00:26:38.152 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (107438) - No such process 00:26:38.152 Process with pid 107438 is not found 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 107438 is not found' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:38.152 ************************************ 00:26:38.152 END TEST spdkcli_nvmf_tcp 00:26:38.152 ************************************ 
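The spdkcli_nvmf_tcp run above is a create/match/clear cycle: start nvmf_tgt, build the bdev and subsystem tree through spdkcli_job.py, dump the live tree with spdkcli.py ll /nvmf, diff the dump against a golden .match template, then delete the configuration and kill the target. A minimal shell sketch of that cycle, using only the binaries and paths visible in this log (the exact spdkcli_job.py argument quoting and the wildcard syntax of the .match template are elided as assumptions):

  #!/usr/bin/env bash
  # Sketch of the create/match/clear cycle logged above; the paths and the
  # -m 0x3 core mask mirror this run rather than a canonical invocation.
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/nvmf_tgt" -m 0x3 -p 0 &     # run_nvmf_tgt step
  nvmf_tgt_pid=$!
  # ... create the Malloc bdevs, the tcp transport and the cnode1..3
  # subsystems via test/spdkcli/spdkcli_job.py, as in the
  # 'Executing command' lines above ...
  "$spdk/scripts/spdkcli.py" ll /nvmf \
    > "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test"       # check_match dump
  "$spdk/test/app/match/match" \
    "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match"   # compare to golden file
  rm -f "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test"
  # ... delete namespaces, hosts, listeners, subsystems and bdevs in reverse
  # order via spdkcli_job.py, then stop the target:
  kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"

A nonzero exit from match is what would fail the test at the check_match step; in this run it passes and the target (pid 107438) is killed during normal cleanup, which is why the second killprocess reports "No such process".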
00:26:38.152 00:26:38.152 real 0m17.732s 00:26:38.152 user 0m38.553s 00:26:38.152 sys 0m0.971s 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.152 17:25:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.152 17:25:30 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:38.152 17:25:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.152 17:25:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.152 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:26:38.152 ************************************ 00:26:38.152 START TEST nvmf_identify_passthru 00:26:38.152 ************************************ 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:38.152 * Looking for test storage... 00:26:38.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.152 17:25:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.152 --rc genhtml_branch_coverage=1 00:26:38.152 --rc genhtml_function_coverage=1 00:26:38.152 --rc genhtml_legend=1 00:26:38.152 --rc geninfo_all_blocks=1 00:26:38.152 --rc geninfo_unexecuted_blocks=1 00:26:38.152 00:26:38.152 ' 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.152 --rc genhtml_branch_coverage=1 00:26:38.152 --rc genhtml_function_coverage=1 00:26:38.152 --rc genhtml_legend=1 00:26:38.152 --rc geninfo_all_blocks=1 00:26:38.152 --rc geninfo_unexecuted_blocks=1 00:26:38.152 00:26:38.152 ' 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.152 --rc genhtml_branch_coverage=1 00:26:38.152 --rc genhtml_function_coverage=1 00:26:38.152 --rc genhtml_legend=1 00:26:38.152 --rc geninfo_all_blocks=1 00:26:38.152 --rc geninfo_unexecuted_blocks=1 00:26:38.152 00:26:38.152 ' 00:26:38.152 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.152 --rc genhtml_branch_coverage=1 00:26:38.152 --rc genhtml_function_coverage=1 00:26:38.152 --rc genhtml_legend=1 00:26:38.152 --rc geninfo_all_blocks=1 00:26:38.152 --rc geninfo_unexecuted_blocks=1 00:26:38.152 00:26:38.152 ' 00:26:38.152 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:38.152 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:38.152 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.152 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.152 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.152 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.152 
17:25:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.153 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:38.153 17:25:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.153 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.153 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:38.153 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:38.153 Cannot find device "nvmf_init_br" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:38.153 Cannot find device "nvmf_init_br2" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:38.153 Cannot find device "nvmf_tgt_br" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:38.153 Cannot find device "nvmf_tgt_br2" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:38.153 Cannot find device "nvmf_init_br" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:38.153 Cannot find device "nvmf_init_br2" 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:26:38.153 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:38.153 Cannot find device "nvmf_tgt_br" 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:38.154 Cannot find device "nvmf_tgt_br2" 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:38.154 Cannot find device "nvmf_br" 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:38.154 Cannot find device "nvmf_init_if" 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:38.154 Cannot find device "nvmf_init_if2" 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:38.154 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:38.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:38.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:26:38.413 00:26:38.413 --- 10.0.0.3 ping statistics --- 00:26:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.413 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:38.413 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:38.413 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:26:38.413 00:26:38.413 --- 10.0.0.4 ping statistics --- 00:26:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.413 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:38.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:38.413 00:26:38.413 --- 10.0.0.1 ping statistics --- 00:26:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.413 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:38.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:38.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:26:38.413 00:26:38.413 --- 10.0.0.2 ping statistics --- 00:26:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.413 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.413 17:25:30 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:38.413 17:25:30 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:38.413 17:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
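Before the passthru target comes up, the script resolves the first local NVMe controller and scrapes its identity: gen_nvme.sh emits a bdev_nvme JSON config, jq pulls each controller's PCIe address (traddr), and spdk_nvme_identify is run against the first one, yielding serial number 12340 here; the model number (QEMU) is scraped the same way in the next step. Condensed into a standalone shell sketch, with paths as in this log and an illustrative helper name id_ctrlr:

  #!/usr/bin/env bash
  # Sketch of the get_first_nvme_bdf + identify scraping logged around here.
  spdk=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints a bdev_nvme config; each .params.traddr is a PCI BDF
  bdfs=($("$spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  bdf=${bdfs[0]}                              # 0000:00:10.0 in this run
  # illustrative helper: identify the controller directly over PCIe
  id_ctrlr() {
    "$spdk/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  }
  nvme_serial_number=$(id_ctrlr | grep 'Serial Number:' | awk '{print $3}')  # 12340
  nvme_model_number=$(id_ctrlr | grep 'Model Number:' | awk '{print $3}')    # QEMU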
00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:38.671 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:38.671 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.671 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:38.930 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:38.930 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107972 00:26:38.930 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.930 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107972 00:26:38.930 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107972 ']' 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.930 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:38.930 [2024-11-19 17:25:31.303906] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:26:38.930 [2024-11-19 17:25:31.304839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.930 [2024-11-19 17:25:31.449997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.189 [2024-11-19 17:25:31.502632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.189 [2024-11-19 17:25:31.502998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.189 [2024-11-19 17:25:31.503148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.189 [2024-11-19 17:25:31.503195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:39.189 [2024-11-19 17:25:31.503221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.189 [2024-11-19 17:25:31.504550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.189 [2024-11-19 17:25:31.504835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.189 [2024-11-19 17:25:31.504841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.189 [2024-11-19 17:25:31.504685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:26:39.189 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.189 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.189 [2024-11-19 17:25:31.694857] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.189 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.189 [2024-11-19 17:25:31.705401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.189 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.189 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 Nvme0n1 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 [2024-11-19 17:25:31.844037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.447 [ 00:26:39.447 { 00:26:39.447 "allow_any_host": true, 00:26:39.447 "hosts": [], 00:26:39.447 "listen_addresses": [], 00:26:39.447 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:39.447 "subtype": "Discovery" 00:26:39.447 }, 00:26:39.447 { 00:26:39.447 "allow_any_host": true, 00:26:39.447 "hosts": [], 00:26:39.447 "listen_addresses": [ 00:26:39.447 { 00:26:39.447 "adrfam": "IPv4", 00:26:39.447 "traddr": "10.0.0.3", 00:26:39.447 "trsvcid": "4420", 00:26:39.447 "trtype": "TCP" 00:26:39.447 } 00:26:39.447 ], 00:26:39.447 "max_cntlid": 65519, 00:26:39.447 "max_namespaces": 1, 00:26:39.447 "min_cntlid": 1, 00:26:39.447 "model_number": "SPDK bdev Controller", 00:26:39.447 "namespaces": [ 00:26:39.447 { 00:26:39.447 "bdev_name": "Nvme0n1", 00:26:39.447 "name": "Nvme0n1", 00:26:39.447 "nguid": "7D71904AE44B4CC291007C0DC66D046E", 00:26:39.447 "nsid": 1, 00:26:39.447 "uuid": "7d71904a-e44b-4cc2-9100-7c0dc66d046e" 00:26:39.447 } 00:26:39.447 ], 00:26:39.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.447 "serial_number": "SPDK00000000000001", 00:26:39.447 "subtype": "NVMe" 00:26:39.447 } 00:26:39.447 ] 00:26:39.447 17:25:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:39.447 17:25:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:39.706 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:39.706 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:39.706 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:39.706 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:39.965 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:39.965 17:25:32 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:39.965 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:39.965 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.965 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.965 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.965 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.965 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:39.965 17:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.965 rmmod nvme_tcp 00:26:39.965 rmmod nvme_fabrics 00:26:39.965 rmmod nvme_keyring 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:26:39.965 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107972 ']' 00:26:40.224 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107972 ']' 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:40.224 killing process with pid 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107972' 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107972 00:26:40.224 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:40.224 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:40.224 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:40.224 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:40.483 17:25:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.483 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:40.483 17:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.483 17:25:33 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:26:40.483 00:26:40.483 real 0m2.886s 00:26:40.483 user 0m5.389s 00:26:40.483 sys 0m0.953s 00:26:40.483 17:25:33 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:40.483 17:25:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:40.484 ************************************ 00:26:40.484 END TEST nvmf_identify_passthru 00:26:40.484 ************************************ 00:26:40.743 17:25:33 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:40.743 17:25:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:40.743 17:25:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.743 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:26:40.743 ************************************ 00:26:40.743 START TEST nvmf_dif 00:26:40.743 ************************************ 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:40.743 * Looking for test storage... 
00:26:40.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.743 17:25:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.743 --rc genhtml_branch_coverage=1 00:26:40.743 --rc genhtml_function_coverage=1 00:26:40.743 --rc genhtml_legend=1 00:26:40.743 --rc geninfo_all_blocks=1 00:26:40.743 --rc geninfo_unexecuted_blocks=1 00:26:40.743 00:26:40.743 ' 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.743 --rc genhtml_branch_coverage=1 00:26:40.743 --rc genhtml_function_coverage=1 00:26:40.743 --rc genhtml_legend=1 00:26:40.743 --rc geninfo_all_blocks=1 00:26:40.743 --rc geninfo_unexecuted_blocks=1 00:26:40.743 00:26:40.743 ' 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:26:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.743 --rc genhtml_branch_coverage=1 00:26:40.743 --rc genhtml_function_coverage=1 00:26:40.743 --rc genhtml_legend=1 00:26:40.743 --rc geninfo_all_blocks=1 00:26:40.743 --rc geninfo_unexecuted_blocks=1 00:26:40.743 00:26:40.743 ' 00:26:40.743 17:25:33 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:40.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.743 --rc genhtml_branch_coverage=1 00:26:40.743 --rc genhtml_function_coverage=1 00:26:40.743 --rc genhtml_legend=1 00:26:40.743 --rc geninfo_all_blocks=1 00:26:40.743 --rc geninfo_unexecuted_blocks=1 00:26:40.743 00:26:40.743 ' 00:26:40.743 17:25:33 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:40.743 17:25:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:40.743 17:25:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.744 17:25:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.744 17:25:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.744 17:25:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.744 17:25:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.744 17:25:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.744 17:25:33 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.744 17:25:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.744 17:25:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:40.744 17:25:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.744 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.744 17:25:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:40.744 17:25:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:40.744 17:25:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:40.744 17:25:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:40.744 17:25:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.744 17:25:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:40.744 17:25:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:40.744 17:25:33 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:40.744 Cannot find device "nvmf_init_br" 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:40.744 17:25:33 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:41.003 Cannot find device "nvmf_init_br2" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:41.003 Cannot find device "nvmf_tgt_br" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@164 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.003 Cannot find device "nvmf_tgt_br2" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@165 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:41.003 Cannot find device "nvmf_init_br" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@166 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:41.003 Cannot find device "nvmf_init_br2" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@167 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:41.003 Cannot find device "nvmf_tgt_br" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@168 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:41.003 Cannot find device "nvmf_tgt_br2" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@169 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:41.003 Cannot find device "nvmf_br" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@170 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:26:41.003 Cannot find device "nvmf_init_if" 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@171 -- # true 00:26:41.003 17:25:33 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:41.004 Cannot find device "nvmf_init_if2" 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@172 -- # true 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@173 -- # true 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@174 -- # true 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:41.004 17:25:33 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.263 17:25:33 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:41.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:41.263 00:26:41.263 --- 10.0.0.3 ping statistics --- 00:26:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.263 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:41.263 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:41.263 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:26:41.263 00:26:41.263 --- 10.0.0.4 ping statistics --- 00:26:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.263 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:41.263 00:26:41.263 --- 10.0.0.1 ping statistics --- 00:26:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.263 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:41.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:26:41.263 00:26:41.263 --- 10.0.0.2 ping statistics --- 00:26:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.263 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:41.263 17:25:33 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:41.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.528 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:41.528 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:41.528 17:25:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.529 17:25:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:41.529 17:25:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:41.529 17:25:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.529 17:25:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:41.813 17:25:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=108350 00:26:41.813 17:25:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 108350 00:26:41.813 17:25:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 108350 ']' 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.813 17:25:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:41.813 [2024-11-19 17:25:34.115307] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:26:41.813 [2024-11-19 17:25:34.115379] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.813 [2024-11-19 17:25:34.261692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.813 [2024-11-19 17:25:34.322956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:41.813 [2024-11-19 17:25:34.323028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.813 [2024-11-19 17:25:34.323043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.813 [2024-11-19 17:25:34.323054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.813 [2024-11-19 17:25:34.323063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.813 [2024-11-19 17:25:34.323569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:26:42.754 17:25:35 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 17:25:35 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.754 17:25:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:42.754 17:25:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 [2024-11-19 17:25:35.125879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.754 17:25:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 ************************************ 00:26:42.754 START TEST fio_dif_1_default 00:26:42.754 ************************************ 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 bdev_null0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.754 17:25:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.754 [2024-11-19 17:25:35.169992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:42.754 { 00:26:42.754 "params": { 00:26:42.754 "name": "Nvme$subsystem", 00:26:42.754 "trtype": "$TEST_TRANSPORT", 00:26:42.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.754 "adrfam": "ipv4", 00:26:42.754 "trsvcid": "$NVMF_PORT", 00:26:42.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.754 "hdgst": ${hdgst:-false}, 00:26:42.754 "ddgst": ${ddgst:-false} 00:26:42.754 }, 00:26:42.754 "method": "bdev_nvme_attach_controller" 00:26:42.754 } 00:26:42.754 EOF 00:26:42.754 )") 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.754 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.755 17:25:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:42.755 "params": { 00:26:42.755 "name": "Nvme0", 00:26:42.755 "trtype": "tcp", 00:26:42.755 "traddr": "10.0.0.3", 00:26:42.755 "adrfam": "ipv4", 00:26:42.755 "trsvcid": "4420", 00:26:42.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:42.755 "hdgst": false, 00:26:42.755 "ddgst": false 00:26:42.755 }, 00:26:42.755 "method": "bdev_nvme_attach_controller" 00:26:42.755 }' 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:42.755 17:25:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:43.013 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:43.013 fio-3.35 00:26:43.013 Starting 1 thread 00:26:55.222 00:26:55.222 filename0: (groupid=0, jobs=1): err= 0: pid=108435: Tue Nov 19 17:25:45 2024 00:26:55.222 read: IOPS=880, BW=3523KiB/s (3608kB/s)(34.5MiB/10032msec) 00:26:55.222 slat (nsec): min=5847, max=51790, avg=7237.18, stdev=2670.90 00:26:55.222 clat (usec): min=359, max=41410, avg=4519.32, stdev=12244.86 00:26:55.222 lat (usec): min=365, max=41419, avg=4526.56, stdev=12245.02 00:26:55.222 clat percentiles (usec): 00:26:55.222 | 1.00th=[ 367], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 379], 00:26:55.222 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[ 
400], 00:26:55.222 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[40633], 95.00th=[40633], 00:26:55.222 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:26:55.222 | 99.99th=[41157] 00:26:55.222 bw ( KiB/s): min= 2208, max= 6368, per=100.00%, avg=3532.80, stdev=1077.57, samples=20 00:26:55.222 iops : min= 552, max= 1592, avg=883.20, stdev=269.39, samples=20 00:26:55.222 lat (usec) : 500=88.99%, 750=0.78% 00:26:55.222 lat (msec) : 10=0.05%, 50=10.19% 00:26:55.222 cpu : usr=91.39%, sys=7.93%, ctx=19, majf=0, minf=9 00:26:55.222 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.222 issued rwts: total=8836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.222 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:55.222 00:26:55.222 Run status group 0 (all jobs): 00:26:55.222 READ: bw=3523KiB/s (3608kB/s), 3523KiB/s-3523KiB/s (3608kB/s-3608kB/s), io=34.5MiB (36.2MB), run=10032-10032msec 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 00:26:55.222 real 0m11.095s 00:26:55.222 user 0m9.837s 00:26:55.222 sys 0m1.089s 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 ************************************ 00:26:55.222 END TEST fio_dif_1_default 00:26:55.222 ************************************ 00:26:55.222 17:25:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:55.222 17:25:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:55.222 17:25:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 ************************************ 00:26:55.222 START TEST fio_dif_1_multi_subsystems 00:26:55.222 ************************************ 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 bdev_null0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 [2024-11-19 17:25:46.319412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 bdev_null1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:55.222 { 00:26:55.222 "params": { 00:26:55.222 "name": "Nvme$subsystem", 00:26:55.222 "trtype": "$TEST_TRANSPORT", 00:26:55.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.222 "adrfam": "ipv4", 00:26:55.222 "trsvcid": "$NVMF_PORT", 00:26:55.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.222 "hdgst": ${hdgst:-false}, 00:26:55.222 "ddgst": ${ddgst:-false} 00:26:55.222 }, 00:26:55.222 "method": "bdev_nvme_attach_controller" 00:26:55.222 } 00:26:55.222 EOF 00:26:55.222 )") 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:55.222 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:55.223 { 00:26:55.223 "params": { 00:26:55.223 "name": "Nvme$subsystem", 00:26:55.223 "trtype": "$TEST_TRANSPORT", 00:26:55.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.223 "adrfam": "ipv4", 00:26:55.223 "trsvcid": "$NVMF_PORT", 00:26:55.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.223 "hdgst": ${hdgst:-false}, 00:26:55.223 "ddgst": ${ddgst:-false} 00:26:55.223 }, 00:26:55.223 "method": "bdev_nvme_attach_controller" 00:26:55.223 } 00:26:55.223 EOF 00:26:55.223 )") 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
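At this point the trace has built one bdev_nvme_attach_controller parameter block per subsystem; the IFS=, / printf / jq lines that follow join those blocks with commas and pretty-print the result onto /dev/fd/62 for the fio plugin to consume. A rough standalone equivalent is sketched below; the outer "subsystems"/"bdev"/"config" wrapper is an assumption based on SPDK's standard JSON-config layout (the trace shows only the per-controller blocks), and gen_json is an illustrative name rather than the harness helper.

# Sketch only: assembles a JSON config equivalent to the traced output.
# The wrapper object is assumed; the per-controller fields match the trace.
gen_json() {
    local sub config=()
    for sub in "$@"; do
        config+=("$(jq -n --arg sub "$sub" '{
            params: {
                name: ("Nvme" + $sub),
                trtype: "tcp",
                traddr: "10.0.0.3",
                adrfam: "ipv4",
                trsvcid: "4420",
                subnqn: ("nqn.2016-06.io.spdk:cnode" + $sub),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $sub),
                hdgst: false,
                ddgst: false
            },
            method: "bdev_nvme_attach_controller"
        }')")
    done
    local IFS=,   # join the per-controller blocks with commas, as the trace does
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}
gen_json 0 1 > bdev.json   # the fio plugin reads this via --spdk_json_conf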
00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:55.223 "params": { 00:26:55.223 "name": "Nvme0", 00:26:55.223 "trtype": "tcp", 00:26:55.223 "traddr": "10.0.0.3", 00:26:55.223 "adrfam": "ipv4", 00:26:55.223 "trsvcid": "4420", 00:26:55.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:55.223 "hdgst": false, 00:26:55.223 "ddgst": false 00:26:55.223 }, 00:26:55.223 "method": "bdev_nvme_attach_controller" 00:26:55.223 },{ 00:26:55.223 "params": { 00:26:55.223 "name": "Nvme1", 00:26:55.223 "trtype": "tcp", 00:26:55.223 "traddr": "10.0.0.3", 00:26:55.223 "adrfam": "ipv4", 00:26:55.223 "trsvcid": "4420", 00:26:55.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.223 "hdgst": false, 00:26:55.223 "ddgst": false 00:26:55.223 }, 00:26:55.223 "method": "bdev_nvme_attach_controller" 00:26:55.223 }' 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:55.223 17:25:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.223 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:55.223 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:55.223 fio-3.35 00:26:55.223 Starting 2 threads 00:27:05.200 00:27:05.201 filename0: (groupid=0, jobs=1): err= 0: pid=108595: Tue Nov 19 17:25:57 2024 00:27:05.201 read: IOPS=266, BW=1066KiB/s (1091kB/s)(10.4MiB/10013msec) 00:27:05.201 slat (nsec): min=5977, max=59245, avg=8628.20, stdev=5247.49 00:27:05.201 clat (usec): min=349, max=42391, avg=14984.18, stdev=19435.96 00:27:05.201 lat (usec): min=355, max=42400, avg=14992.81, stdev=19435.60 00:27:05.201 clat percentiles (usec): 00:27:05.201 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:27:05.201 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 685], 00:27:05.201 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:05.201 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:05.201 | 99.99th=[42206] 00:27:05.201 bw ( KiB/s): min= 576, max= 1699, per=52.36%, avg=1065.75, stdev=249.59, samples=20 00:27:05.201 iops : 
min= 144, max= 424, avg=266.40, stdev=62.30, samples=20 00:27:05.201 lat (usec) : 500=58.73%, 750=3.67%, 1000=1.46% 00:27:05.201 lat (msec) : 2=0.15%, 50=35.98% 00:27:05.201 cpu : usr=97.57%, sys=2.04%, ctx=15, majf=0, minf=0 00:27:05.201 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.201 issued rwts: total=2668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.201 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:05.201 filename1: (groupid=0, jobs=1): err= 0: pid=108596: Tue Nov 19 17:25:57 2024 00:27:05.201 read: IOPS=242, BW=970KiB/s (993kB/s)(9728KiB/10030msec) 00:27:05.201 slat (nsec): min=5930, max=50647, avg=8848.86, stdev=5283.76 00:27:05.201 clat (usec): min=351, max=41467, avg=16468.56, stdev=19789.68 00:27:05.201 lat (usec): min=358, max=41490, avg=16477.41, stdev=19789.16 00:27:05.201 clat percentiles (usec): 00:27:05.201 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:27:05.201 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 807], 00:27:05.201 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:05.201 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:27:05.201 | 99.99th=[41681] 00:27:05.201 bw ( KiB/s): min= 608, max= 2336, per=47.74%, avg=971.20, stdev=366.52, samples=20 00:27:05.201 iops : min= 152, max= 584, avg=242.80, stdev=91.63, samples=20 00:27:05.201 lat (usec) : 500=53.58%, 750=4.15%, 1000=2.47% 00:27:05.201 lat (msec) : 2=0.16%, 50=39.64% 00:27:05.201 cpu : usr=97.52%, sys=2.05%, ctx=21, majf=0, minf=0 00:27:05.201 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.201 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.201 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:05.201 00:27:05.201 Run status group 0 (all jobs): 00:27:05.201 READ: bw=2034KiB/s (2083kB/s), 970KiB/s-1066KiB/s (993kB/s-1091kB/s), io=19.9MiB (20.9MB), run=10013-10030msec 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 00:27:05.201 real 0m11.268s 00:27:05.201 user 0m20.428s 00:27:05.201 sys 0m0.709s 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 ************************************ 00:27:05.201 END TEST fio_dif_1_multi_subsystems 00:27:05.201 ************************************ 00:27:05.201 17:25:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:05.201 17:25:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.201 17:25:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 ************************************ 00:27:05.201 START TEST fio_dif_rand_params 00:27:05.201 ************************************ 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 bdev_null0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.201 [2024-11-19 17:25:57.640894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:05.201 { 00:27:05.201 "params": { 00:27:05.201 "name": "Nvme$subsystem", 00:27:05.201 "trtype": "$TEST_TRANSPORT", 00:27:05.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.201 "adrfam": "ipv4", 00:27:05.201 "trsvcid": "$NVMF_PORT", 00:27:05.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.201 "hdgst": ${hdgst:-false}, 00:27:05.201 "ddgst": ${ddgst:-false} 00:27:05.201 }, 00:27:05.201 "method": "bdev_nvme_attach_controller" 00:27:05.201 } 00:27:05.201 EOF 00:27:05.201 )") 00:27:05.201 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
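The create_subsystems 0 call traced at the top of this test expands into four RPCs: create a null bdev carrying 16 bytes of per-block metadata with DIF type 3, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Outside the harness the same sequence can be driven with SPDK's scripts/rpc.py against a running nvmf_tgt, roughly as below (the repo path is taken from the trace; using rpc.py instead of the test's rpc_cmd wrapper is an assumption):

# Sketch: the four RPCs behind create_subsystem 0 for NULL_DIF=3,
# issued with rpc.py rather than the harness's rpc_cmd wrapper.
SPDK=/home/vagrant/spdk_repo/spdk   # path as seen in the trace
NQN=nqn.2016-06.io.spdk:cnode0

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, protection type 3
"$SPDK/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
"$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" --serial-number 53313233-0 --allow-any-host
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" bdev_null0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420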
00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:05.202 "params": { 00:27:05.202 "name": "Nvme0", 00:27:05.202 "trtype": "tcp", 00:27:05.202 "traddr": "10.0.0.3", 00:27:05.202 "adrfam": "ipv4", 00:27:05.202 "trsvcid": "4420", 00:27:05.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.202 "hdgst": false, 00:27:05.202 "ddgst": false 00:27:05.202 }, 00:27:05.202 "method": "bdev_nvme_attach_controller" 00:27:05.202 }' 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:05.202 17:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.461 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:05.461 ... 
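The header above decodes the NULL_DIF=3 parameters set when this test started: random reads at 128 KiB for all operation classes, the spdk_bdev engine, and a queue depth of 3 per job. An illustrative job file that would produce the same header is sketched below; the option names are standard fio, but filename=Nvme0n1 assumes SPDK's usual <controller>n<nsid> naming for the namespace bdev attached in the JSON above, and the exact contents of the harness's generated job file are not shown in the trace.

# Sketch: write an illustrative fio job matching the printed header
# (not the harness's generated file, which the trace does not show).
cat > job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
runtime=5

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF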
00:27:05.461 fio-3.35 00:27:05.461 Starting 3 threads 00:27:12.026 00:27:12.026 filename0: (groupid=0, jobs=1): err= 0: pid=108752: Tue Nov 19 17:26:03 2024 00:27:12.026 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(159MiB/5005msec) 00:27:12.026 slat (nsec): min=5939, max=57157, avg=11924.36, stdev=5584.95 00:27:12.026 clat (usec): min=3414, max=50077, avg=11802.71, stdev=12376.45 00:27:12.026 lat (usec): min=3423, max=50084, avg=11814.64, stdev=12376.07 00:27:12.026 clat percentiles (usec): 00:27:12.026 | 1.00th=[ 3490], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6390], 00:27:12.026 | 30.00th=[ 6783], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8455], 00:27:12.026 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[45876], 95.00th=[47973], 00:27:12.026 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:27:12.026 | 99.99th=[50070] 00:27:12.026 bw ( KiB/s): min=22528, max=42496, per=28.50%, avg=32460.80, stdev=5604.52, samples=10 00:27:12.026 iops : min= 176, max= 332, avg=253.60, stdev=43.79, samples=10 00:27:12.026 lat (msec) : 4=1.18%, 10=87.80%, 20=0.63%, 50=10.16%, 100=0.24% 00:27:12.026 cpu : usr=94.02%, sys=4.44%, ctx=10, majf=0, minf=0 00:27:12.026 IO depths : 1=6.9%, 2=93.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.026 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.026 filename0: (groupid=0, jobs=1): err= 0: pid=108753: Tue Nov 19 17:26:03 2024 00:27:12.026 read: IOPS=386, BW=48.4MiB/s (50.7MB/s)(242MiB/5004msec) 00:27:12.026 slat (nsec): min=5884, max=65636, avg=10938.83, stdev=5699.12 00:27:12.026 clat (usec): min=2893, max=48879, avg=7731.08, stdev=3587.48 00:27:12.026 lat (usec): min=2904, max=48888, avg=7742.02, stdev=3588.25 00:27:12.026 clat percentiles (usec): 00:27:12.026 | 1.00th=[ 3425], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3752], 00:27:12.026 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7701], 00:27:12.026 | 70.00th=[ 9241], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:27:12.026 | 99.00th=[12780], 99.50th=[13698], 99.90th=[47973], 99.95th=[49021], 00:27:12.026 | 99.99th=[49021] 00:27:12.026 bw ( KiB/s): min=43776, max=60672, per=44.30%, avg=50460.44, stdev=6386.47, samples=9 00:27:12.026 iops : min= 342, max= 474, avg=394.22, stdev=49.89, samples=9 00:27:12.026 lat (msec) : 4=20.82%, 10=51.19%, 20=27.69%, 50=0.31% 00:27:12.026 cpu : usr=92.26%, sys=5.74%, ctx=10, majf=0, minf=0 00:27:12.026 IO depths : 1=20.8%, 2=79.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.027 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.027 filename0: (groupid=0, jobs=1): err= 0: pid=108754: Tue Nov 19 17:26:03 2024 00:27:12.027 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5003msec) 00:27:12.027 slat (nsec): min=5676, max=59750, avg=14201.54, stdev=6567.75 00:27:12.027 clat (usec): min=3263, max=52591, avg=12005.16, stdev=11847.21 00:27:12.027 lat (usec): min=3273, max=52610, avg=12019.36, stdev=11847.04 00:27:12.027 clat percentiles (usec): 00:27:12.027 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 5932], 
20.00th=[ 6259], 00:27:12.027 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 9372], 60.00th=[ 9896], 00:27:12.027 | 70.00th=[10290], 80.00th=[10814], 90.00th=[12256], 95.00th=[48497], 00:27:12.027 | 99.00th=[51119], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:27:12.027 | 99.99th=[52691] 00:27:12.027 bw ( KiB/s): min=20224, max=43008, per=27.51%, avg=31338.11, stdev=8313.13, samples=9 00:27:12.027 iops : min= 158, max= 336, avg=244.78, stdev=64.93, samples=9 00:27:12.027 lat (msec) : 4=0.32%, 10=61.70%, 20=28.85%, 50=6.17%, 100=2.96% 00:27:12.027 cpu : usr=95.38%, sys=3.42%, ctx=27, majf=0, minf=0 00:27:12.027 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.027 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.027 00:27:12.027 Run status group 0 (all jobs): 00:27:12.027 READ: bw=111MiB/s (117MB/s), 31.2MiB/s-48.4MiB/s (32.7MB/s-50.7MB/s), io=557MiB (584MB), run=5003-5005msec 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 bdev_null0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 [2024-11-19 17:26:03.725526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 bdev_null1 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 bdev_null2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.027 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.027 { 00:27:12.027 "params": { 00:27:12.027 "name": "Nvme$subsystem", 00:27:12.027 "trtype": "$TEST_TRANSPORT", 00:27:12.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.027 "adrfam": "ipv4", 
00:27:12.027 "trsvcid": "$NVMF_PORT", 00:27:12.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.027 "hdgst": ${hdgst:-false}, 00:27:12.027 "ddgst": ${ddgst:-false} 00:27:12.027 }, 00:27:12.027 "method": "bdev_nvme_attach_controller" 00:27:12.027 } 00:27:12.027 EOF 00:27:12.027 )") 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.028 { 00:27:12.028 "params": { 00:27:12.028 "name": "Nvme$subsystem", 00:27:12.028 "trtype": "$TEST_TRANSPORT", 00:27:12.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.028 "adrfam": "ipv4", 00:27:12.028 "trsvcid": "$NVMF_PORT", 00:27:12.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.028 "hdgst": ${hdgst:-false}, 00:27:12.028 "ddgst": ${ddgst:-false} 00:27:12.028 }, 00:27:12.028 "method": "bdev_nvme_attach_controller" 00:27:12.028 } 00:27:12.028 EOF 00:27:12.028 )") 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.028 
17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.028 { 00:27:12.028 "params": { 00:27:12.028 "name": "Nvme$subsystem", 00:27:12.028 "trtype": "$TEST_TRANSPORT", 00:27:12.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.028 "adrfam": "ipv4", 00:27:12.028 "trsvcid": "$NVMF_PORT", 00:27:12.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.028 "hdgst": ${hdgst:-false}, 00:27:12.028 "ddgst": ${ddgst:-false} 00:27:12.028 }, 00:27:12.028 "method": "bdev_nvme_attach_controller" 00:27:12.028 } 00:27:12.028 EOF 00:27:12.028 )") 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:12.028 "params": { 00:27:12.028 "name": "Nvme0", 00:27:12.028 "trtype": "tcp", 00:27:12.028 "traddr": "10.0.0.3", 00:27:12.028 "adrfam": "ipv4", 00:27:12.028 "trsvcid": "4420", 00:27:12.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:12.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:12.028 "hdgst": false, 00:27:12.028 "ddgst": false 00:27:12.028 }, 00:27:12.028 "method": "bdev_nvme_attach_controller" 00:27:12.028 },{ 00:27:12.028 "params": { 00:27:12.028 "name": "Nvme1", 00:27:12.028 "trtype": "tcp", 00:27:12.028 "traddr": "10.0.0.3", 00:27:12.028 "adrfam": "ipv4", 00:27:12.028 "trsvcid": "4420", 00:27:12.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.028 "hdgst": false, 00:27:12.028 "ddgst": false 00:27:12.028 }, 00:27:12.028 "method": "bdev_nvme_attach_controller" 00:27:12.028 },{ 00:27:12.028 "params": { 00:27:12.028 "name": "Nvme2", 00:27:12.028 "trtype": "tcp", 00:27:12.028 "traddr": "10.0.0.3", 00:27:12.028 "adrfam": "ipv4", 00:27:12.028 "trsvcid": "4420", 00:27:12.028 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:12.028 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.028 "hdgst": false, 00:27:12.028 "ddgst": false 00:27:12.028 }, 00:27:12.028 "method": "bdev_nvme_attach_controller" 00:27:12.028 }' 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
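Both sanitizer probes above come back empty, so the LD_PRELOAD that follows carries only the fio plugin. The idiom being traced: ldd the plugin, extract any ASan runtime it links (libasan for GCC builds, libclang_rt.asan for Clang), and preload that runtime ahead of the plugin so its interceptors are installed first. Condensed, with the paths from the trace and on-disk stand-ins for the /dev/fd process substitutions:

# Sketch of the traced probe; an empty asan_lib (as in this run)
# degrades to preloading the plugin alone.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# bdev.json and job.fio stand in for /dev/fd/62 and /dev/fd/61 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio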
00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:12.028 17:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.028 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.028 ... 00:27:12.028 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.028 ... 00:27:12.028 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.028 ... 00:27:12.028 fio-3.35 00:27:12.028 Starting 24 threads 00:27:33.963 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108849: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=562, BW=2248KiB/s (2302kB/s)(22.0MiB/10003msec) 00:27:33.963 slat (usec): min=3, max=5063, avg=25.57, stdev=196.30 00:27:33.963 clat (msec): min=5, max=104, avg=28.30, stdev=13.59 00:27:33.963 lat (msec): min=5, max=104, avg=28.32, stdev=13.58 00:27:33.963 clat percentiles (msec): 00:27:33.963 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.963 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 26], 00:27:33.963 | 70.00th=[ 33], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 55], 00:27:33.963 | 99.00th=[ 72], 99.50th=[ 83], 99.90th=[ 89], 99.95th=[ 105], 00:27:33.963 | 99.99th=[ 105] 00:27:33.963 bw ( KiB/s): min= 1072, max= 3392, per=3.46%, avg=2239.32, stdev=766.17, samples=19 00:27:33.963 iops : min= 268, max= 848, avg=559.79, stdev=191.52, samples=19 00:27:33.963 lat (msec) : 10=1.17%, 20=29.78%, 50=61.70%, 100=7.26%, 250=0.09% 00:27:33.963 cpu : usr=46.94%, sys=1.00%, ctx=1357, majf=0, minf=0 00:27:33.963 IO depths : 1=2.3%, 2=5.1%, 4=14.0%, 8=67.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=5622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108850: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.1MiB/10002msec) 00:27:33.963 slat (usec): min=3, max=8018, avg=21.00, stdev=169.99 00:27:33.963 clat (usec): min=1910, max=122488, avg=26888.74, stdev=14923.15 00:27:33.963 lat (usec): min=1917, max=122497, avg=26909.74, stdev=14927.10 00:27:33.963 clat percentiles (msec): 00:27:33.963 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 17], 00:27:33.963 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:27:33.963 | 70.00th=[ 31], 80.00th=[ 38], 90.00th=[ 46], 95.00th=[ 58], 00:27:33.963 | 99.00th=[ 85], 99.50th=[ 86], 99.90th=[ 101], 99.95th=[ 123], 00:27:33.963 | 99.99th=[ 123] 00:27:33.963 bw ( KiB/s): min= 816, max= 4784, per=3.60%, avg=2330.16, stdev=943.20, samples=19 00:27:33.963 iops : min= 204, max= 1196, avg=582.47, stdev=235.87, samples=19 00:27:33.963 lat (msec) : 2=0.24%, 4=0.03%, 10=1.44%, 20=39.99%, 50=51.17% 00:27:33.963 lat (msec) : 100=6.98%, 250=0.15% 00:27:33.963 cpu : usr=62.96%, sys=1.13%, ctx=1459, majf=0, minf=0 00:27:33.963 IO depths : 1=3.2%, 2=6.8%, 4=17.2%, 8=63.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=5917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108851: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=580, BW=2321KiB/s (2377kB/s)(22.7MiB/10008msec) 00:27:33.963 slat (usec): min=6, max=4084, avg=21.66, stdev=174.82 00:27:33.963 clat (usec): min=6931, max=95838, avg=27411.31, stdev=14390.82 00:27:33.963 lat (usec): min=6939, max=95861, avg=27432.97, stdev=14392.28 00:27:33.963 clat percentiles (usec): 00:27:33.963 | 1.00th=[ 9241], 5.00th=[13698], 10.00th=[15139], 20.00th=[16057], 00:27:33.963 | 30.00th=[17695], 40.00th=[21890], 50.00th=[23725], 60.00th=[24511], 00:27:33.963 | 70.00th=[30802], 80.00th=[38536], 90.00th=[46924], 95.00th=[55837], 00:27:33.963 | 99.00th=[87557], 99.50th=[92799], 99.90th=[94897], 99.95th=[95945], 00:27:33.963 | 99.99th=[95945] 00:27:33.963 bw ( KiB/s): min= 856, max= 3760, per=3.56%, avg=2299.47, stdev=786.56, samples=19 00:27:33.963 iops : min= 214, max= 940, avg=574.79, stdev=196.71, samples=19 00:27:33.963 lat (msec) : 10=1.29%, 20=36.04%, 50=56.13%, 100=6.54% 00:27:33.963 cpu : usr=48.64%, sys=0.86%, ctx=1295, majf=0, minf=0 00:27:33.963 IO depths : 1=1.9%, 2=4.1%, 4=13.0%, 8=69.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=90.6%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108852: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10007msec) 00:27:33.963 slat (usec): min=3, max=4060, avg=19.38, stdev=131.19 00:27:33.963 clat (usec): min=6422, max=99641, avg=27870.87, stdev=13947.80 00:27:33.963 lat (usec): min=6435, max=99653, avg=27890.25, stdev=13945.58 00:27:33.963 clat percentiles (msec): 00:27:33.963 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:27:33.963 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 25], 00:27:33.963 | 70.00th=[ 31], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 56], 00:27:33.963 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 91], 99.95th=[ 101], 00:27:33.963 | 99.99th=[ 101] 00:27:33.963 bw ( KiB/s): min= 896, max= 3456, per=3.52%, avg=2278.21, stdev=703.18, samples=19 00:27:33.963 iops : min= 224, max= 864, avg=569.47, stdev=175.89, samples=19 00:27:33.963 lat (msec) : 10=0.58%, 20=30.65%, 50=62.34%, 100=6.44% 00:27:33.963 cpu : usr=50.39%, sys=0.96%, ctx=1507, majf=0, minf=0 00:27:33.963 IO depths : 1=2.6%, 2=5.5%, 4=14.9%, 8=66.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=5717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108853: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=831, BW=3324KiB/s (3404kB/s)(32.5MiB/10003msec) 00:27:33.963 slat (usec): min=5, max=8030, avg=21.87, stdev=283.87 00:27:33.963 clat (usec): min=1126, max=56156, avg=19091.19, stdev=8173.22 00:27:33.963 lat (usec): min=1134, max=56165, avg=19113.05, 
stdev=8175.81 00:27:33.963 clat percentiles (usec): 00:27:33.963 | 1.00th=[ 1434], 5.00th=[ 3261], 10.00th=[ 9503], 20.00th=[11863], 00:27:33.963 | 30.00th=[13435], 40.00th=[16581], 50.00th=[21365], 60.00th=[23200], 00:27:33.963 | 70.00th=[23987], 80.00th=[23987], 90.00th=[27395], 95.00th=[33162], 00:27:33.963 | 99.00th=[38536], 99.50th=[41681], 99.90th=[50070], 99.95th=[50070], 00:27:33.963 | 99.99th=[56361] 00:27:33.963 bw ( KiB/s): min= 2480, max= 8128, per=5.22%, avg=3377.47, stdev=1276.08, samples=19 00:27:33.963 iops : min= 620, max= 2032, avg=844.37, stdev=319.02, samples=19 00:27:33.963 lat (msec) : 2=1.89%, 4=4.61%, 10=5.63%, 20=31.50%, 50=56.26% 00:27:33.963 lat (msec) : 100=0.11% 00:27:33.963 cpu : usr=32.16%, sys=1.05%, ctx=1239, majf=0, minf=9 00:27:33.963 IO depths : 1=1.7%, 2=3.6%, 4=12.0%, 8=71.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=90.4%, 8=4.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=8313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108854: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10001msec) 00:27:33.963 slat (usec): min=6, max=4032, avg=19.89, stdev=136.46 00:27:33.963 clat (msec): min=3, max=104, avg=29.39, stdev=14.52 00:27:33.963 lat (msec): min=3, max=104, avg=29.41, stdev=14.52 00:27:33.963 clat percentiles (msec): 00:27:33.963 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:27:33.963 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 27], 00:27:33.963 | 70.00th=[ 34], 80.00th=[ 41], 90.00th=[ 48], 95.00th=[ 56], 00:27:33.963 | 99.00th=[ 83], 99.50th=[ 93], 99.90th=[ 95], 99.95th=[ 105], 00:27:33.963 | 99.99th=[ 105] 00:27:33.963 bw ( KiB/s): min= 896, max= 3504, per=3.33%, avg=2151.95, stdev=716.98, samples=19 00:27:33.963 iops : min= 224, max= 876, avg=537.95, stdev=179.27, samples=19 00:27:33.963 lat (msec) : 4=0.30%, 10=0.96%, 20=26.72%, 50=64.54%, 100=7.42% 00:27:33.963 lat (msec) : 250=0.06% 00:27:33.963 cpu : usr=49.59%, sys=0.88%, ctx=1351, majf=0, minf=0 00:27:33.963 IO depths : 1=3.3%, 2=7.1%, 4=18.1%, 8=62.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.963 issued rwts: total=5415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.963 filename0: (groupid=0, jobs=1): err= 0: pid=108855: Tue Nov 19 17:26:25 2024 00:27:33.963 read: IOPS=939, BW=3757KiB/s (3848kB/s)(36.8MiB/10022msec) 00:27:33.963 slat (usec): min=6, max=4021, avg=14.27, stdev=96.59 00:27:33.963 clat (usec): min=1235, max=60807, avg=16934.72, stdev=8566.03 00:27:33.963 lat (usec): min=1242, max=60815, avg=16948.99, stdev=8568.38 00:27:33.963 clat percentiles (usec): 00:27:33.963 | 1.00th=[ 1467], 5.00th=[ 2442], 10.00th=[ 7177], 20.00th=[ 8979], 00:27:33.963 | 30.00th=[11338], 40.00th=[14615], 50.00th=[15926], 60.00th=[20055], 00:27:33.963 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25035], 95.00th=[32113], 00:27:33.963 | 99.00th=[38536], 99.50th=[44303], 99.90th=[47973], 99.95th=[54264], 00:27:33.963 | 99.99th=[60556] 00:27:33.963 bw ( KiB/s): min= 2032, max= 9144, per=5.81%, avg=3758.40, stdev=1793.45, samples=20 00:27:33.963 iops : min= 
508, max= 2286, avg=939.70, stdev=448.36, samples=20 00:27:33.963 lat (msec) : 2=3.56%, 4=3.98%, 10=17.13%, 20=35.30%, 50=39.94% 00:27:33.963 lat (msec) : 100=0.08% 00:27:33.963 cpu : usr=37.74%, sys=0.94%, ctx=1077, majf=0, minf=0 00:27:33.963 IO depths : 1=1.6%, 2=3.3%, 4=10.6%, 8=73.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=9414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename0: (groupid=0, jobs=1): err= 0: pid=108856: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=568, BW=2273KiB/s (2327kB/s)(22.2MiB/10009msec) 00:27:33.964 slat (usec): min=3, max=4047, avg=18.42, stdev=130.75 00:27:33.964 clat (msec): min=7, max=105, avg=28.04, stdev=13.90 00:27:33.964 lat (msec): min=7, max=105, avg=28.05, stdev=13.90 00:27:33.964 clat percentiles (msec): 00:27:33.964 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:27:33.964 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 25], 00:27:33.964 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 55], 00:27:33.964 | 99.00th=[ 83], 99.50th=[ 92], 99.90th=[ 93], 99.95th=[ 93], 00:27:33.964 | 99.99th=[ 106] 00:27:33.964 bw ( KiB/s): min= 784, max= 3440, per=3.52%, avg=2276.32, stdev=730.63, samples=19 00:27:33.964 iops : min= 196, max= 860, avg=569.05, stdev=182.69, samples=19 00:27:33.964 lat (msec) : 10=0.39%, 20=32.00%, 50=61.46%, 100=6.12%, 250=0.04% 00:27:33.964 cpu : usr=47.75%, sys=1.09%, ctx=1419, majf=0, minf=0 00:27:33.964 IO depths : 1=2.3%, 2=5.0%, 4=13.6%, 8=68.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=5687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108857: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=689, BW=2759KiB/s (2825kB/s)(27.0MiB/10006msec) 00:27:33.964 slat (usec): min=3, max=8030, avg=17.72, stdev=210.24 00:27:33.964 clat (usec): min=1304, max=95949, avg=23086.89, stdev=12541.51 00:27:33.964 lat (usec): min=1311, max=95970, avg=23104.62, stdev=12542.56 00:27:33.964 clat percentiles (usec): 00:27:33.964 | 1.00th=[ 2245], 5.00th=[ 9372], 10.00th=[11469], 20.00th=[14615], 00:27:33.964 | 30.00th=[15926], 40.00th=[18482], 50.00th=[21365], 60.00th=[23200], 00:27:33.964 | 70.00th=[23987], 80.00th=[29230], 90.00th=[39584], 95.00th=[47973], 00:27:33.964 | 99.00th=[67634], 99.50th=[76022], 99.90th=[95945], 99.95th=[95945], 00:27:33.964 | 99.99th=[95945] 00:27:33.964 bw ( KiB/s): min= 1248, max= 5248, per=4.14%, avg=2674.11, stdev=918.87, samples=19 00:27:33.964 iops : min= 312, max= 1312, avg=668.53, stdev=229.72, samples=19 00:27:33.964 lat (msec) : 2=0.71%, 4=2.14%, 10=3.82%, 20=38.68%, 50=50.42% 00:27:33.964 lat (msec) : 100=4.22% 00:27:33.964 cpu : usr=44.71%, sys=0.96%, ctx=1243, majf=0, minf=9 00:27:33.964 IO depths : 1=1.3%, 2=2.6%, 4=9.2%, 8=74.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108858: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=584, BW=2338KiB/s (2394kB/s)(22.9MiB/10010msec) 00:27:33.964 slat (usec): min=6, max=6019, avg=20.46, stdev=151.49 00:27:33.964 clat (usec): min=9752, max=80981, avg=27236.42, stdev=11731.32 00:27:33.964 lat (usec): min=9774, max=80994, avg=27256.88, stdev=11731.88 00:27:33.964 clat percentiles (usec): 00:27:33.964 | 1.00th=[11600], 5.00th=[15008], 10.00th=[15926], 20.00th=[17171], 00:27:33.964 | 30.00th=[18744], 40.00th=[21365], 50.00th=[23725], 60.00th=[25822], 00:27:33.964 | 70.00th=[32113], 80.00th=[38536], 90.00th=[42206], 95.00th=[47973], 00:27:33.964 | 99.00th=[63701], 99.50th=[70779], 99.90th=[81265], 99.95th=[81265], 00:27:33.964 | 99.99th=[81265] 00:27:33.964 bw ( KiB/s): min= 1254, max= 3376, per=3.61%, avg=2331.05, stdev=694.26, samples=19 00:27:33.964 iops : min= 313, max= 844, avg=582.68, stdev=173.65, samples=19 00:27:33.964 lat (msec) : 10=0.09%, 20=37.50%, 50=58.36%, 100=4.05% 00:27:33.964 cpu : usr=64.78%, sys=1.15%, ctx=1162, majf=0, minf=0 00:27:33.964 IO depths : 1=3.4%, 2=7.1%, 4=17.1%, 8=63.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=5850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108859: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=613, BW=2455KiB/s (2514kB/s)(24.0MiB/10016msec) 00:27:33.964 slat (usec): min=4, max=4936, avg=18.45, stdev=140.15 00:27:33.964 clat (usec): min=1372, max=119144, avg=25942.38, stdev=12855.85 00:27:33.964 lat (usec): min=1378, max=119171, avg=25960.83, stdev=12854.75 00:27:33.964 clat percentiles (msec): 00:27:33.964 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.964 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 24], 00:27:33.964 | 70.00th=[ 27], 80.00th=[ 34], 90.00th=[ 44], 95.00th=[ 51], 00:27:33.964 | 99.00th=[ 71], 99.50th=[ 72], 99.90th=[ 95], 99.95th=[ 120], 00:27:33.964 | 99.99th=[ 120] 00:27:33.964 bw ( KiB/s): min= 1032, max= 3344, per=3.78%, avg=2443.95, stdev=688.94, samples=19 00:27:33.964 iops : min= 258, max= 836, avg=610.95, stdev=172.23, samples=19 00:27:33.964 lat (msec) : 2=0.26%, 10=0.60%, 20=38.71%, 50=54.86%, 100=5.48% 00:27:33.964 lat (msec) : 250=0.08% 00:27:33.964 cpu : usr=46.95%, sys=0.74%, ctx=1262, majf=0, minf=0 00:27:33.964 IO depths : 1=1.5%, 2=3.4%, 4=11.0%, 8=72.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=6148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108860: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10006msec) 00:27:33.964 slat (usec): min=4, max=4069, avg=21.39, stdev=148.22 00:27:33.964 clat (msec): min=6, max=112, avg=27.50, stdev=14.38 00:27:33.964 lat (msec): min=6, max=112, avg=27.53, stdev=14.38 00:27:33.964 clat percentiles (msec): 00:27:33.964 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.964 | 
30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 25], 00:27:33.964 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 58], 00:27:33.964 | 99.00th=[ 72], 99.50th=[ 79], 99.90th=[ 94], 99.95th=[ 113], 00:27:33.964 | 99.99th=[ 113] 00:27:33.964 bw ( KiB/s): min= 896, max= 3584, per=3.48%, avg=2248.68, stdev=809.84, samples=19 00:27:33.964 iops : min= 224, max= 896, avg=562.11, stdev=202.49, samples=19 00:27:33.964 lat (msec) : 10=0.93%, 20=41.03%, 50=49.87%, 100=8.08%, 250=0.09% 00:27:33.964 cpu : usr=59.89%, sys=1.21%, ctx=1204, majf=0, minf=0 00:27:33.964 IO depths : 1=3.1%, 2=6.9%, 4=16.9%, 8=63.3%, 16=9.8%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=5781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108861: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=582, BW=2329KiB/s (2385kB/s)(22.8MiB/10012msec) 00:27:33.964 slat (usec): min=5, max=5069, avg=21.95, stdev=167.91 00:27:33.964 clat (msec): min=7, max=118, avg=27.33, stdev=13.23 00:27:33.964 lat (msec): min=7, max=118, avg=27.35, stdev=13.23 00:27:33.964 clat percentiles (msec): 00:27:33.964 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.964 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:27:33.964 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 46], 95.00th=[ 54], 00:27:33.964 | 99.00th=[ 68], 99.50th=[ 92], 99.90th=[ 95], 99.95th=[ 120], 00:27:33.964 | 99.99th=[ 120] 00:27:33.964 bw ( KiB/s): min= 872, max= 3560, per=3.61%, avg=2334.05, stdev=748.40, samples=19 00:27:33.964 iops : min= 218, max= 890, avg=583.42, stdev=187.15, samples=19 00:27:33.964 lat (msec) : 10=0.77%, 20=34.15%, 50=59.18%, 100=5.81%, 250=0.09% 00:27:33.964 cpu : usr=48.11%, sys=0.89%, ctx=1326, majf=0, minf=0 00:27:33.964 IO depths : 1=2.3%, 2=5.2%, 4=14.2%, 8=67.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108862: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=560, BW=2241KiB/s (2294kB/s)(21.9MiB/10004msec) 00:27:33.964 slat (usec): min=5, max=4061, avg=20.74, stdev=150.28 00:27:33.964 clat (msec): min=6, max=117, avg=28.40, stdev=14.17 00:27:33.964 lat (msec): min=6, max=117, avg=28.42, stdev=14.17 00:27:33.964 clat percentiles (msec): 00:27:33.964 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:27:33.964 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 26], 00:27:33.964 | 70.00th=[ 31], 80.00th=[ 39], 90.00th=[ 48], 95.00th=[ 56], 00:27:33.964 | 99.00th=[ 85], 99.50th=[ 90], 99.90th=[ 105], 99.95th=[ 118], 00:27:33.964 | 99.99th=[ 118] 00:27:33.964 bw ( KiB/s): min= 944, max= 3480, per=3.46%, avg=2235.89, stdev=702.73, samples=19 00:27:33.964 iops : min= 236, max= 870, avg=558.89, stdev=175.76, samples=19 00:27:33.964 lat (msec) : 10=0.39%, 20=28.05%, 50=63.94%, 100=7.46%, 250=0.16% 00:27:33.964 cpu : usr=47.05%, sys=0.86%, ctx=1371, majf=0, minf=0 00:27:33.964 IO depths : 1=2.2%, 2=5.1%, 4=14.0%, 8=67.9%, 16=10.8%, 32=0.0%, >=64=0.0% 
00:27:33.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.964 issued rwts: total=5604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.964 filename1: (groupid=0, jobs=1): err= 0: pid=108863: Tue Nov 19 17:26:25 2024 00:27:33.964 read: IOPS=985, BW=3940KiB/s (4035kB/s)(38.5MiB/10018msec) 00:27:33.964 slat (usec): min=3, max=9020, avg=19.44, stdev=247.34 00:27:33.964 clat (usec): min=704, max=48042, avg=16099.59, stdev=8156.19 00:27:33.965 lat (usec): min=711, max=48057, avg=16119.03, stdev=8155.01 00:27:33.965 clat percentiles (usec): 00:27:33.965 | 1.00th=[ 1221], 5.00th=[ 1729], 10.00th=[ 4293], 20.00th=[ 9372], 00:27:33.965 | 30.00th=[11731], 40.00th=[13829], 50.00th=[15664], 60.00th=[17957], 00:27:33.965 | 70.00th=[21365], 80.00th=[23462], 90.00th=[24511], 95.00th=[30016], 00:27:33.965 | 99.00th=[36963], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:27:33.965 | 99.99th=[47973] 00:27:33.965 bw ( KiB/s): min= 2240, max=10104, per=6.10%, avg=3940.20, stdev=1752.44, samples=20 00:27:33.965 iops : min= 560, max= 2526, avg=985.05, stdev=438.11, samples=20 00:27:33.965 lat (usec) : 750=0.03%, 1000=0.11% 00:27:33.965 lat (msec) : 2=5.49%, 4=4.32%, 10=12.09%, 20=42.03%, 50=35.92% 00:27:33.965 cpu : usr=38.52%, sys=1.12%, ctx=1181, majf=0, minf=0 00:27:33.965 IO depths : 1=1.4%, 2=3.1%, 4=11.2%, 8=72.9%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=9868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename1: (groupid=0, jobs=1): err= 0: pid=108864: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10016msec) 00:27:33.965 slat (usec): min=4, max=4070, avg=24.58, stdev=206.20 00:27:33.965 clat (msec): min=7, max=114, avg=26.64, stdev=14.40 00:27:33.965 lat (msec): min=7, max=114, avg=26.67, stdev=14.40 00:27:33.965 clat percentiles (msec): 00:27:33.965 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.965 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:27:33.965 | 70.00th=[ 29], 80.00th=[ 35], 90.00th=[ 48], 95.00th=[ 56], 00:27:33.965 | 99.00th=[ 79], 99.50th=[ 91], 99.90th=[ 94], 99.95th=[ 114], 00:27:33.965 | 99.99th=[ 114] 00:27:33.965 bw ( KiB/s): min= 808, max= 3408, per=3.69%, avg=2383.79, stdev=746.76, samples=19 00:27:33.965 iops : min= 202, max= 852, avg=595.89, stdev=186.70, samples=19 00:27:33.965 lat (msec) : 10=0.59%, 20=41.78%, 50=50.38%, 100=7.18%, 250=0.08% 00:27:33.965 cpu : usr=48.21%, sys=0.95%, ctx=1339, majf=0, minf=0 00:27:33.965 IO depths : 1=1.5%, 2=3.5%, 4=12.2%, 8=71.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108865: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=652, BW=2611KiB/s (2674kB/s)(25.6MiB/10031msec) 00:27:33.965 slat (usec): min=3, max=10992, avg=20.51, stdev=245.56 00:27:33.965 
clat (usec): min=1844, max=93957, avg=24322.88, stdev=11284.32 00:27:33.965 lat (usec): min=1851, max=93978, avg=24343.39, stdev=11281.55 00:27:33.965 clat percentiles (usec): 00:27:33.965 | 1.00th=[ 2671], 5.00th=[10945], 10.00th=[11994], 20.00th=[15795], 00:27:33.965 | 30.00th=[20579], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:27:33.965 | 70.00th=[24773], 80.00th=[30016], 90.00th=[35914], 95.00th=[45876], 00:27:33.965 | 99.00th=[69731], 99.50th=[85459], 99.90th=[93848], 99.95th=[93848], 00:27:33.965 | 99.99th=[93848] 00:27:33.965 bw ( KiB/s): min= 1120, max= 4766, per=4.00%, avg=2586.42, stdev=733.43, samples=19 00:27:33.965 iops : min= 280, max= 1191, avg=646.58, stdev=183.27, samples=19 00:27:33.965 lat (msec) : 2=0.27%, 4=1.76%, 10=2.17%, 20=23.70%, 50=69.95% 00:27:33.965 lat (msec) : 100=2.15% 00:27:33.965 cpu : usr=40.25%, sys=0.89%, ctx=1065, majf=0, minf=9 00:27:33.965 IO depths : 1=1.1%, 2=2.6%, 4=8.8%, 8=75.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=6548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108866: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10012msec) 00:27:33.965 slat (usec): min=6, max=4039, avg=20.83, stdev=147.80 00:27:33.965 clat (msec): min=6, max=117, avg=27.80, stdev=14.45 00:27:33.965 lat (msec): min=6, max=117, avg=27.83, stdev=14.45 00:27:33.965 clat percentiles (msec): 00:27:33.965 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 17], 00:27:33.965 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:27:33.965 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 56], 00:27:33.965 | 99.00th=[ 83], 99.50th=[ 90], 99.90th=[ 105], 99.95th=[ 118], 00:27:33.965 | 99.99th=[ 118] 00:27:33.965 bw ( KiB/s): min= 944, max= 3520, per=3.50%, avg=2262.37, stdev=774.27, samples=19 00:27:33.965 iops : min= 236, max= 880, avg=565.53, stdev=193.63, samples=19 00:27:33.965 lat (msec) : 10=0.68%, 20=36.34%, 50=56.14%, 100=6.67%, 250=0.17% 00:27:33.965 cpu : usr=48.10%, sys=0.94%, ctx=1336, majf=0, minf=0 00:27:33.965 IO depths : 1=1.8%, 2=4.3%, 4=12.9%, 8=69.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108867: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=790, BW=3164KiB/s (3240kB/s)(31.1MiB/10075msec) 00:27:33.965 slat (usec): min=3, max=12032, avg=17.70, stdev=205.83 00:27:33.965 clat (usec): min=1275, max=95983, avg=20096.01, stdev=12517.07 00:27:33.965 lat (usec): min=1282, max=96002, avg=20113.72, stdev=12517.83 00:27:33.965 clat percentiles (usec): 00:27:33.965 | 1.00th=[ 1975], 5.00th=[ 5932], 10.00th=[ 7963], 20.00th=[11731], 00:27:33.965 | 30.00th=[14615], 40.00th=[15926], 50.00th=[17433], 60.00th=[21890], 00:27:33.965 | 70.00th=[23987], 80.00th=[23987], 90.00th=[31851], 95.00th=[39060], 00:27:33.965 | 99.00th=[87557], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:27:33.965 | 99.99th=[95945] 00:27:33.965 bw ( KiB/s): min= 
688, max= 6632, per=4.92%, avg=3178.75, stdev=1243.35, samples=20 00:27:33.965 iops : min= 172, max= 1658, avg=794.65, stdev=310.78, samples=20 00:27:33.965 lat (msec) : 2=1.05%, 4=2.66%, 10=11.67%, 20=40.59%, 50=41.50% 00:27:33.965 lat (msec) : 100=2.52% 00:27:33.965 cpu : usr=41.88%, sys=0.93%, ctx=1142, majf=0, minf=9 00:27:33.965 IO depths : 1=0.6%, 2=1.4%, 4=7.8%, 8=77.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=7969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108868: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=943, BW=3774KiB/s (3865kB/s)(36.9MiB/10021msec) 00:27:33.965 slat (usec): min=3, max=8036, avg=17.36, stdev=200.74 00:27:33.965 clat (usec): min=1090, max=59113, avg=16831.55, stdev=8295.99 00:27:33.965 lat (usec): min=1097, max=59134, avg=16848.91, stdev=8298.09 00:27:33.965 clat percentiles (usec): 00:27:33.965 | 1.00th=[ 1450], 5.00th=[ 3195], 10.00th=[ 7308], 20.00th=[ 9241], 00:27:33.965 | 30.00th=[11863], 40.00th=[14222], 50.00th=[15926], 60.00th=[18744], 00:27:33.965 | 70.00th=[22152], 80.00th=[23987], 90.00th=[26084], 95.00th=[30278], 00:27:33.965 | 99.00th=[38011], 99.50th=[45876], 99.90th=[47973], 99.95th=[58983], 00:27:33.965 | 99.99th=[58983] 00:27:33.965 bw ( KiB/s): min= 2312, max= 8240, per=5.84%, avg=3777.35, stdev=1512.82, samples=20 00:27:33.965 iops : min= 578, max= 2060, avg=944.30, stdev=378.19, samples=20 00:27:33.965 lat (msec) : 2=1.98%, 4=4.05%, 10=17.43%, 20=39.68%, 50=36.80% 00:27:33.965 lat (msec) : 100=0.06% 00:27:33.965 cpu : usr=41.22%, sys=1.09%, ctx=1343, majf=0, minf=9 00:27:33.965 IO depths : 1=1.3%, 2=2.7%, 4=10.4%, 8=73.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 4=90.2%, 8=4.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=9455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108869: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=585, BW=2344KiB/s (2400kB/s)(22.9MiB/10017msec) 00:27:33.965 slat (usec): min=3, max=4041, avg=21.16, stdev=176.11 00:27:33.965 clat (usec): min=7490, max=90808, avg=27141.93, stdev=14414.84 00:27:33.965 lat (usec): min=7497, max=90830, avg=27163.09, stdev=14419.05 00:27:33.965 clat percentiles (usec): 00:27:33.965 | 1.00th=[ 8979], 5.00th=[12518], 10.00th=[14222], 20.00th=[15664], 00:27:33.965 | 30.00th=[17433], 40.00th=[20579], 50.00th=[22152], 60.00th=[23987], 00:27:33.965 | 70.00th=[30016], 80.00th=[39060], 90.00th=[47973], 95.00th=[56886], 00:27:33.965 | 99.00th=[72877], 99.50th=[74974], 99.90th=[90702], 99.95th=[90702], 00:27:33.965 | 99.99th=[90702] 00:27:33.965 bw ( KiB/s): min= 896, max= 3552, per=3.62%, avg=2338.05, stdev=814.58, samples=19 00:27:33.965 iops : min= 224, max= 888, avg=584.47, stdev=203.63, samples=19 00:27:33.965 lat (msec) : 10=1.82%, 20=37.01%, 50=52.72%, 100=8.45% 00:27:33.965 cpu : usr=48.42%, sys=0.93%, ctx=1536, majf=0, minf=0 00:27:33.965 IO depths : 1=2.3%, 2=5.3%, 4=14.9%, 8=66.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:33.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 complete : 0=0.0%, 
4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.965 issued rwts: total=5869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.965 filename2: (groupid=0, jobs=1): err= 0: pid=108870: Tue Nov 19 17:26:25 2024 00:27:33.965 read: IOPS=608, BW=2435KiB/s (2493kB/s)(23.8MiB/10007msec) 00:27:33.965 slat (usec): min=6, max=4046, avg=16.83, stdev=135.16 00:27:33.965 clat (msec): min=6, max=112, avg=26.18, stdev=13.92 00:27:33.965 lat (msec): min=6, max=112, avg=26.20, stdev=13.92 00:27:33.965 clat percentiles (msec): 00:27:33.965 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 16], 00:27:33.965 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 24], 00:27:33.965 | 70.00th=[ 30], 80.00th=[ 34], 90.00th=[ 44], 95.00th=[ 55], 00:27:33.965 | 99.00th=[ 75], 99.50th=[ 83], 99.90th=[ 112], 99.95th=[ 112], 00:27:33.965 | 99.99th=[ 112] 00:27:33.965 bw ( KiB/s): min= 816, max= 3776, per=3.74%, avg=2420.26, stdev=833.00, samples=19 00:27:33.965 iops : min= 204, max= 944, avg=605.00, stdev=208.28, samples=19 00:27:33.966 lat (msec) : 10=1.25%, 20=36.92%, 50=55.29%, 100=6.39%, 250=0.15% 00:27:33.966 cpu : usr=48.26%, sys=1.04%, ctx=1520, majf=0, minf=0 00:27:33.966 IO depths : 1=1.5%, 2=3.5%, 4=10.8%, 8=72.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:33.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 issued rwts: total=6091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.966 filename2: (groupid=0, jobs=1): err= 0: pid=108871: Tue Nov 19 17:26:25 2024 00:27:33.966 read: IOPS=948, BW=3796KiB/s (3887kB/s)(37.1MiB/10011msec) 00:27:33.966 slat (usec): min=3, max=11019, avg=19.76, stdev=274.00 00:27:33.966 clat (usec): min=815, max=52969, avg=16728.65, stdev=8296.17 00:27:33.966 lat (usec): min=822, max=52988, avg=16748.41, stdev=8295.65 00:27:33.966 clat percentiles (usec): 00:27:33.966 | 1.00th=[ 1270], 5.00th=[ 2057], 10.00th=[ 5669], 20.00th=[ 9896], 00:27:33.966 | 30.00th=[11863], 40.00th=[14091], 50.00th=[16057], 60.00th=[19530], 00:27:33.966 | 70.00th=[22152], 80.00th=[23725], 90.00th=[25560], 95.00th=[31065], 00:27:33.966 | 99.00th=[36963], 99.50th=[38536], 99.90th=[47973], 99.95th=[48497], 00:27:33.966 | 99.99th=[53216] 00:27:33.966 bw ( KiB/s): min= 2512, max= 9456, per=6.00%, avg=3881.95, stdev=1621.01, samples=19 00:27:33.966 iops : min= 628, max= 2364, avg=970.47, stdev=405.24, samples=19 00:27:33.966 lat (usec) : 1000=0.06% 00:27:33.966 lat (msec) : 2=4.68%, 4=4.46%, 10=11.11%, 20=40.84%, 50=38.80% 00:27:33.966 lat (msec) : 100=0.04% 00:27:33.966 cpu : usr=38.66%, sys=1.05%, ctx=1185, majf=0, minf=9 00:27:33.966 IO depths : 1=1.6%, 2=3.4%, 4=11.4%, 8=72.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:27:33.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 issued rwts: total=9500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.966 filename2: (groupid=0, jobs=1): err= 0: pid=108872: Tue Nov 19 17:26:25 2024 00:27:33.966 read: IOPS=776, BW=3108KiB/s (3182kB/s)(30.5MiB/10038msec) 00:27:33.966 slat (usec): min=3, max=8033, avg=15.24, stdev=156.11 00:27:33.966 clat (usec): min=824, max=84042, avg=20474.45, stdev=11460.46 00:27:33.966 lat (usec): 
min=832, max=84063, avg=20489.69, stdev=11462.50 00:27:33.966 clat percentiles (usec): 00:27:33.966 | 1.00th=[ 1450], 5.00th=[ 2212], 10.00th=[ 5604], 20.00th=[ 9896], 00:27:33.966 | 30.00th=[14353], 40.00th=[19530], 50.00th=[22938], 60.00th=[23725], 00:27:33.966 | 70.00th=[23987], 80.00th=[25297], 90.00th=[33817], 95.00th=[38536], 00:27:33.966 | 99.00th=[62129], 99.50th=[63701], 99.90th=[78119], 99.95th=[84411], 00:27:33.966 | 99.99th=[84411] 00:27:33.966 bw ( KiB/s): min= 1200, max= 9848, per=4.82%, avg=3113.20, stdev=1760.79, samples=20 00:27:33.966 iops : min= 300, max= 2462, avg=778.30, stdev=440.20, samples=20 00:27:33.966 lat (usec) : 1000=0.13% 00:27:33.966 lat (msec) : 2=4.55%, 4=3.09%, 10=12.28%, 20=20.40%, 50=57.75% 00:27:33.966 lat (msec) : 100=1.80% 00:27:33.966 cpu : usr=38.38%, sys=1.08%, ctx=1074, majf=0, minf=9 00:27:33.966 IO depths : 1=1.4%, 2=3.2%, 4=10.5%, 8=73.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:33.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 complete : 0=0.0%, 4=90.2%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.966 issued rwts: total=7799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:33.966 00:27:33.966 Run status group 0 (all jobs): 00:27:33.966 READ: bw=63.1MiB/s (66.2MB/s), 2166KiB/s-3940KiB/s (2218kB/s-4035kB/s), io=636MiB (667MB), run=10001-10075msec 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 bdev_null0 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:33.966 17:26:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 [2024-11-19 17:26:26.083678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 bdev_null1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.966 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.967 { 00:27:33.967 "params": { 00:27:33.967 "name": "Nvme$subsystem", 00:27:33.967 "trtype": "$TEST_TRANSPORT", 00:27:33.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.967 "adrfam": "ipv4", 00:27:33.967 "trsvcid": "$NVMF_PORT", 00:27:33.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.967 "hdgst": ${hdgst:-false}, 00:27:33.967 "ddgst": ${ddgst:-false} 00:27:33.967 }, 00:27:33.967 "method": "bdev_nvme_attach_controller" 00:27:33.967 } 00:27:33.967 EOF 00:27:33.967 )") 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.967 { 00:27:33.967 "params": { 00:27:33.967 "name": "Nvme$subsystem", 00:27:33.967 "trtype": "$TEST_TRANSPORT", 00:27:33.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.967 "adrfam": "ipv4", 00:27:33.967 "trsvcid": "$NVMF_PORT", 00:27:33.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.967 "hdgst": ${hdgst:-false}, 00:27:33.967 "ddgst": ${ddgst:-false} 00:27:33.967 }, 00:27:33.967 "method": 
"bdev_nvme_attach_controller" 00:27:33.967 } 00:27:33.967 EOF 00:27:33.967 )") 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:33.967 "params": { 00:27:33.967 "name": "Nvme0", 00:27:33.967 "trtype": "tcp", 00:27:33.967 "traddr": "10.0.0.3", 00:27:33.967 "adrfam": "ipv4", 00:27:33.967 "trsvcid": "4420", 00:27:33.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.967 "hdgst": false, 00:27:33.967 "ddgst": false 00:27:33.967 }, 00:27:33.967 "method": "bdev_nvme_attach_controller" 00:27:33.967 },{ 00:27:33.967 "params": { 00:27:33.967 "name": "Nvme1", 00:27:33.967 "trtype": "tcp", 00:27:33.967 "traddr": "10.0.0.3", 00:27:33.967 "adrfam": "ipv4", 00:27:33.967 "trsvcid": "4420", 00:27:33.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.967 "hdgst": false, 00:27:33.967 "ddgst": false 00:27:33.967 }, 00:27:33.967 "method": "bdev_nvme_attach_controller" 00:27:33.967 }' 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:33.967 17:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.967 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:33.967 ... 00:27:33.967 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:33.967 ... 
00:27:33.967 fio-3.35 00:27:33.967 Starting 4 threads 00:27:40.567 00:27:40.567 filename0: (groupid=0, jobs=1): err= 0: pid=109100: Tue Nov 19 17:26:32 2024 00:27:40.567 read: IOPS=2243, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5003msec) 00:27:40.567 slat (nsec): min=6233, max=68217, avg=8199.14, stdev=4489.01 00:27:40.567 clat (usec): min=962, max=4924, avg=3520.94, stdev=199.29 00:27:40.567 lat (usec): min=968, max=4931, avg=3529.14, stdev=199.41 00:27:40.567 clat percentiles (usec): 00:27:40.567 | 1.00th=[ 3326], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3458], 00:27:40.567 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3523], 00:27:40.567 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3654], 95.00th=[ 3752], 00:27:40.567 | 99.00th=[ 3916], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[ 4752], 00:27:40.567 | 99.99th=[ 4817] 00:27:40.567 bw ( KiB/s): min=17664, max=18560, per=25.10%, avg=17962.67, stdev=256.00, samples=9 00:27:40.567 iops : min= 2208, max= 2320, avg=2245.33, stdev=32.00, samples=9 00:27:40.567 lat (usec) : 1000=0.06% 00:27:40.567 lat (msec) : 2=0.37%, 4=99.00%, 10=0.57% 00:27:40.567 cpu : usr=95.66%, sys=3.26%, ctx=8, majf=0, minf=0 00:27:40.567 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 issued rwts: total=11224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:40.567 filename0: (groupid=0, jobs=1): err= 0: pid=109101: Tue Nov 19 17:26:32 2024 00:27:40.567 read: IOPS=2231, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5002msec) 00:27:40.567 slat (nsec): min=3531, max=59182, avg=8913.08, stdev=5404.99 00:27:40.567 clat (usec): min=665, max=7693, avg=3535.56, stdev=204.77 00:27:40.567 lat (usec): min=677, max=7720, avg=3544.48, stdev=205.43 00:27:40.567 clat percentiles (usec): 00:27:40.567 | 1.00th=[ 3392], 5.00th=[ 3425], 10.00th=[ 3425], 20.00th=[ 3458], 00:27:40.567 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3523], 00:27:40.567 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3654], 95.00th=[ 3752], 00:27:40.567 | 99.00th=[ 3916], 99.50th=[ 4293], 99.90th=[ 6849], 99.95th=[ 6980], 00:27:40.567 | 99.99th=[ 7701] 00:27:40.567 bw ( KiB/s): min=17536, max=18048, per=24.94%, avg=17848.89, stdev=158.21, samples=9 00:27:40.567 iops : min= 2192, max= 2256, avg=2231.11, stdev=19.78, samples=9 00:27:40.567 lat (usec) : 750=0.01% 00:27:40.567 lat (msec) : 2=0.04%, 4=99.23%, 10=0.73% 00:27:40.567 cpu : usr=94.74%, sys=3.52%, ctx=15, majf=0, minf=0 00:27:40.567 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 issued rwts: total=11160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:40.567 filename1: (groupid=0, jobs=1): err= 0: pid=109102: Tue Nov 19 17:26:32 2024 00:27:40.567 read: IOPS=2236, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5001msec) 00:27:40.567 slat (nsec): min=4858, max=71639, avg=12458.91, stdev=5920.30 00:27:40.567 clat (usec): min=2569, max=5066, avg=3516.46, stdev=132.81 00:27:40.567 lat (usec): min=2582, max=5090, avg=3528.92, stdev=132.81 00:27:40.567 clat percentiles (usec): 00:27:40.567 | 1.00th=[ 3326], 5.00th=[ 
3392], 10.00th=[ 3425], 20.00th=[ 3458], 00:27:40.567 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:27:40.567 | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3621], 95.00th=[ 3720], 00:27:40.567 | 99.00th=[ 3884], 99.50th=[ 4178], 99.90th=[ 4686], 99.95th=[ 5014], 00:27:40.567 | 99.99th=[ 5080] 00:27:40.567 bw ( KiB/s): min=17664, max=18048, per=25.01%, avg=17895.44, stdev=125.93, samples=9 00:27:40.567 iops : min= 2208, max= 2256, avg=2236.89, stdev=15.72, samples=9 00:27:40.567 lat (msec) : 4=99.38%, 10=0.62% 00:27:40.567 cpu : usr=95.16%, sys=3.58%, ctx=9, majf=0, minf=0 00:27:40.567 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.567 issued rwts: total=11184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:40.567 filename1: (groupid=0, jobs=1): err= 0: pid=109103: Tue Nov 19 17:26:32 2024 00:27:40.567 read: IOPS=2236, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5001msec) 00:27:40.567 slat (nsec): min=3519, max=69888, avg=12663.04, stdev=5655.63 00:27:40.567 clat (usec): min=2123, max=5363, avg=3520.63, stdev=192.26 00:27:40.568 lat (usec): min=2144, max=5369, avg=3533.29, stdev=192.06 00:27:40.568 clat percentiles (usec): 00:27:40.568 | 1.00th=[ 2737], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3458], 00:27:40.568 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:27:40.568 | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3752], 00:27:40.568 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[ 5145], 00:27:40.568 | 99.99th=[ 5276] 00:27:40.568 bw ( KiB/s): min=17680, max=18048, per=25.01%, avg=17895.44, stdev=121.75, samples=9 00:27:40.568 iops : min= 2210, max= 2256, avg=2236.89, stdev=15.20, samples=9 00:27:40.568 lat (msec) : 4=98.01%, 10=1.99% 00:27:40.568 cpu : usr=94.86%, sys=3.86%, ctx=6, majf=0, minf=0 00:27:40.568 IO depths : 1=4.4%, 2=25.0%, 4=50.0%, 8=20.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.568 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.568 issued rwts: total=11184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.568 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:40.568 00:27:40.568 Run status group 0 (all jobs): 00:27:40.568 READ: bw=69.9MiB/s (73.3MB/s), 17.4MiB/s-17.5MiB/s (18.3MB/s-18.4MB/s), io=350MiB (367MB), run=5001-5003msec 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 00:27:40.568 real 0m34.665s 00:27:40.568 user 3m43.901s 00:27:40.568 sys 0m4.735s 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.568 ************************************ 00:27:40.568 END TEST fio_dif_rand_params 00:27:40.568 17:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 ************************************ 00:27:40.568 17:26:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:40.568 17:26:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.568 17:26:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 ************************************ 00:27:40.568 START TEST fio_dif_digest 00:27:40.568 ************************************ 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:40.568 17:26:32 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 bdev_null0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.568 [2024-11-19 17:26:32.357359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.568 { 00:27:40.568 "params": { 00:27:40.568 "name": "Nvme$subsystem", 00:27:40.568 "trtype": "$TEST_TRANSPORT", 00:27:40.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.568 "adrfam": "ipv4", 00:27:40.568 "trsvcid": "$NVMF_PORT", 00:27:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.568 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.568 "hdgst": ${hdgst:-false}, 00:27:40.568 "ddgst": ${ddgst:-false} 00:27:40.568 }, 00:27:40.568 "method": "bdev_nvme_attach_controller" 00:27:40.568 } 00:27:40.568 EOF 00:27:40.568 )") 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:27:40.568 17:26:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.568 "params": { 00:27:40.568 "name": "Nvme0", 00:27:40.568 "trtype": "tcp", 00:27:40.568 "traddr": "10.0.0.3", 00:27:40.568 "adrfam": "ipv4", 00:27:40.569 "trsvcid": "4420", 00:27:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.569 "hdgst": true, 00:27:40.569 "ddgst": true 00:27:40.569 }, 00:27:40.569 "method": "bdev_nvme_attach_controller" 00:27:40.569 }' 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:40.569 17:26:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.569 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:40.569 ... 
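Unlike the earlier jobs, this one is attached with "hdgst": true and "ddgst": true, so the initiator computes and verifies a CRC32C digest over every NVMe/TCP PDU header and data payload while three 128 KiB randread threads run at iodepth 3 against the DIF-type-3 null bdev. The same digest-enabled attachment can be issued against a live SPDK app over RPC; a sketch, assuming the --hdgst/--ddgst long options mirror the JSON parameter names above (verify with scripts/rpc.py bdev_nvme_attach_controller -h):

# Assumption: option spellings match the JSON params printed in the trace;
# run from the SPDK repo root against an app already serving the RPC socket.
scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --hdgst --ddgst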
00:27:40.569 fio-3.35 00:27:40.569 Starting 3 threads 00:27:52.776 00:27:52.777 filename0: (groupid=0, jobs=1): err= 0: pid=109209: Tue Nov 19 17:26:43 2024 00:27:52.777 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(318MiB/10006msec) 00:27:52.777 slat (nsec): min=6215, max=72837, avg=13848.80, stdev=6519.88 00:27:52.777 clat (usec): min=5785, max=52523, avg=11785.48, stdev=1907.94 00:27:52.777 lat (usec): min=5792, max=52532, avg=11799.33, stdev=1908.46 00:27:52.777 clat percentiles (usec): 00:27:52.777 | 1.00th=[ 6915], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11076], 00:27:52.777 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:27:52.777 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:27:52.777 | 99.00th=[14353], 99.50th=[14746], 99.90th=[48497], 99.95th=[51643], 00:27:52.777 | 99.99th=[52691] 00:27:52.777 bw ( KiB/s): min=30208, max=35840, per=33.01%, avg=32431.16, stdev=1349.38, samples=19 00:27:52.777 iops : min= 236, max= 280, avg=253.37, stdev=10.54, samples=19 00:27:52.777 lat (msec) : 10=5.51%, 20=94.38%, 50=0.04%, 100=0.08% 00:27:52.777 cpu : usr=93.81%, sys=4.58%, ctx=11, majf=0, minf=0 00:27:52.777 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 issued rwts: total=2543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.777 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.777 filename0: (groupid=0, jobs=1): err= 0: pid=109210: Tue Nov 19 17:26:43 2024 00:27:52.777 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(360MiB/10008msec) 00:27:52.777 slat (nsec): min=6480, max=67801, avg=17576.96, stdev=6560.39 00:27:52.777 clat (usec): min=6424, max=51533, avg=10397.49, stdev=3705.45 00:27:52.777 lat (usec): min=6436, max=51545, avg=10415.07, stdev=3705.43 00:27:52.777 clat percentiles (usec): 00:27:52.777 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:27:52.777 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:27:52.777 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:27:52.777 | 99.00th=[12649], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:27:52.777 | 99.99th=[51643] 00:27:52.777 bw ( KiB/s): min=31744, max=39424, per=37.78%, avg=37120.00, stdev=1995.78, samples=19 00:27:52.777 iops : min= 248, max= 308, avg=290.00, stdev=15.59, samples=19 00:27:52.777 lat (msec) : 10=46.44%, 20=52.72%, 50=0.38%, 100=0.45% 00:27:52.777 cpu : usr=93.23%, sys=4.81%, ctx=18, majf=0, minf=0 00:27:52.777 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 issued rwts: total=2881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.777 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.777 filename0: (groupid=0, jobs=1): err= 0: pid=109211: Tue Nov 19 17:26:43 2024 00:27:52.777 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10004msec) 00:27:52.777 slat (nsec): min=6209, max=73496, avg=14799.11, stdev=6181.87 00:27:52.777 clat (usec): min=4120, max=17174, avg=13272.66, stdev=1324.30 00:27:52.777 lat (usec): min=4129, max=17192, avg=13287.46, stdev=1325.50 00:27:52.777 clat percentiles (usec): 00:27:52.777 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[12387], 
20.00th=[12780], 00:27:52.777 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:27:52.777 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:27:52.777 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 00:27:52.777 | 99.99th=[17171] 00:27:52.777 bw ( KiB/s): min=27648, max=30976, per=29.33%, avg=28820.21, stdev=992.64, samples=19 00:27:52.777 iops : min= 216, max= 242, avg=225.16, stdev= 7.76, samples=19 00:27:52.777 lat (msec) : 10=5.09%, 20=94.91% 00:27:52.777 cpu : usr=94.20%, sys=4.36%, ctx=15, majf=0, minf=0 00:27:52.777 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.777 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.777 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.777 00:27:52.777 Run status group 0 (all jobs): 00:27:52.777 READ: bw=95.9MiB/s (101MB/s), 28.2MiB/s-36.0MiB/s (29.6MB/s-37.7MB/s), io=960MiB (1007MB), run=10004-10008msec 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.777 00:27:52.777 real 0m11.055s 00:27:52.777 user 0m28.810s 00:27:52.777 sys 0m1.674s 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.777 17:26:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.777 ************************************ 00:27:52.777 END TEST fio_dif_digest 00:27:52.777 ************************************ 00:27:52.777 17:26:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:52.777 17:26:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.777 rmmod nvme_tcp 00:27:52.777 rmmod nvme_fabrics 00:27:52.777 rmmod nvme_keyring 00:27:52.777 17:26:43 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 108350 ']' 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 108350 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 108350 ']' 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 108350 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108350 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.777 killing process with pid 108350 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108350' 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@973 -- # kill 108350 00:27:52.777 17:26:43 nvmf_dif -- common/autotest_common.sh@978 -- # wait 108350 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:27:52.777 17:26:43 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:52.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:52.777 Waiting for block devices as requested 00:27:52.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:52.777 17:26:44 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.778 17:26:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.778 17:26:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.778 17:26:44 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:27:52.778 00:27:52.778 real 1m11.565s 00:27:52.778 user 5m37.828s 00:27:52.778 sys 0m14.715s 00:27:52.778 17:26:44 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.778 17:26:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.778 ************************************ 00:27:52.778 END TEST nvmf_dif 00:27:52.778 ************************************ 00:27:52.778 17:26:44 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:52.778 17:26:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.778 17:26:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.778 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:27:52.778 ************************************ 00:27:52.778 START TEST nvmf_abort_qd_sizes 00:27:52.778 ************************************ 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:52.778 * Looking for test storage... 00:27:52.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:52.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.778 --rc genhtml_branch_coverage=1 00:27:52.778 --rc genhtml_function_coverage=1 00:27:52.778 --rc genhtml_legend=1 00:27:52.778 --rc geninfo_all_blocks=1 00:27:52.778 --rc geninfo_unexecuted_blocks=1 00:27:52.778 00:27:52.778 ' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:52.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.778 --rc genhtml_branch_coverage=1 00:27:52.778 --rc genhtml_function_coverage=1 00:27:52.778 --rc genhtml_legend=1 00:27:52.778 --rc geninfo_all_blocks=1 00:27:52.778 --rc geninfo_unexecuted_blocks=1 00:27:52.778 00:27:52.778 ' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:52.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.778 --rc genhtml_branch_coverage=1 00:27:52.778 --rc genhtml_function_coverage=1 00:27:52.778 --rc genhtml_legend=1 00:27:52.778 --rc geninfo_all_blocks=1 00:27:52.778 --rc geninfo_unexecuted_blocks=1 00:27:52.778 00:27:52.778 ' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:52.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.778 --rc genhtml_branch_coverage=1 00:27:52.778 --rc genhtml_function_coverage=1 00:27:52.778 --rc genhtml_legend=1 00:27:52.778 --rc geninfo_all_blocks=1 00:27:52.778 --rc geninfo_unexecuted_blocks=1 00:27:52.778 00:27:52.778 ' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.778 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:52.779 Cannot find device "nvmf_init_br" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:52.779 Cannot find device "nvmf_init_br2" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:52.779 Cannot find device "nvmf_tgt_br" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.779 Cannot find device "nvmf_tgt_br2" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:52.779 Cannot find device "nvmf_init_br" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:52.779 Cannot find device "nvmf_init_br2" 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:27:52.779 17:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:52.779 Cannot find device "nvmf_tgt_br" 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:52.779 Cannot find device "nvmf_tgt_br2" 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:52.779 Cannot find device "nvmf_br" 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:52.779 Cannot find device "nvmf_init_if" 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:52.779 Cannot find device "nvmf_init_if2" 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
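The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any leftover topology, and on a fresh node there is nothing to delete. Condensed into one place, the topology the trace below then builds is two initiator-side and two target-side veth pairs bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace (interface names and addresses taken verbatim from the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for d in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$d" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
    'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  for d in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$d" master nvmf_br
  done

The four ping checks that follow then verify both directions across the bridge: the initiator side reaches 10.0.0.3/10.0.0.4 inside the namespace, and the namespace reaches 10.0.0.1/10.0.0.2 outside it.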
00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:52.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:52.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:27:52.779 00:27:52.779 --- 10.0.0.3 ping statistics --- 00:27:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.779 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:52.779 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:52.779 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:27:52.779 00:27:52.779 --- 10.0.0.4 ping statistics --- 00:27:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.779 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:52.779 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:52.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:27:52.779 00:27:52.779 --- 10.0.0.1 ping statistics --- 00:27:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.779 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:53.038 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:53.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:53.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:27:53.038 00:27:53.038 --- 10.0.0.2 ping statistics --- 00:27:53.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.038 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:27:53.038 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.038 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:27:53.038 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:53.038 17:26:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:53.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.605 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.605 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109858 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109858 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109858 ']' 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.864 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:53.864 [2024-11-19 17:26:46.292716] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
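nvmfappstart above launches nvmf_tgt inside the new namespace and waitforlisten blocks until the application answers on its JSON-RPC socket. A minimal sketch of that readiness loop, assuming the default /var/tmp/spdk.sock socket path (the real waitforlisten in autotest_common.sh is more elaborate, but the idea is the same):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # rpc_get_methods is available on any running SPDK app, so it doubles
  # as a liveness probe for the RPC listener.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died' >&2; exit 1; }
    sleep 0.5
  done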
00:27:53.864 [2024-11-19 17:26:46.292835] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.123 [2024-11-19 17:26:46.453651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.124 [2024-11-19 17:26:46.519971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.124 [2024-11-19 17:26:46.520057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.124 [2024-11-19 17:26:46.520072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.124 [2024-11-19 17:26:46.520084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.124 [2024-11-19 17:26:46.520094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.124 [2024-11-19 17:26:46.521692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.124 [2024-11-19 17:26:46.521779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.124 [2024-11-19 17:26:46.521891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.124 [2024-11-19 17:26:46.521907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:54.383 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:54.384 17:26:46 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 ************************************ 00:27:54.384 START TEST spdk_target_abort 00:27:54.384 ************************************ 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 spdk_targetn1 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 [2024-11-19 17:26:46.870256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.384 [2024-11-19 17:26:46.902490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.384 17:26:46 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:54.384 17:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:57.721 Initializing NVMe Controllers 00:27:57.721 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:27:57.721 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:57.721 Initialization complete. Launching workers. 
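The rabort helper above builds the -r target string one field at a time and then sweeps the queue-depth list; flattened, the loop it runs is simply:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Each run issues 4 KiB I/O at a 50/50 read/write mix at the given queue depth and submits abort commands against the outstanding requests, as the q=4 output that follows shows.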
00:27:57.721 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10700, failed: 0 00:27:57.721 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 9538 00:27:57.721 success 741, unsuccessful 421, failed 0 00:27:57.721 17:26:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:57.721 17:26:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:01.908 Initializing NVMe Controllers 00:28:01.908 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:01.908 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:01.908 Initialization complete. Launching workers. 00:28:01.908 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5920, failed: 0 00:28:01.908 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 4693 00:28:01.908 success 234, unsuccessful 993, failed 0 00:28:01.908 17:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:01.908 17:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:04.440 Initializing NVMe Controllers 00:28:04.440 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:04.440 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:04.440 Initialization complete. Launching workers. 
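The per-run counters above line up: aborts submitted plus aborts that failed to submit equals the number of I/Os completed, and successful plus unsuccessful aborts equals the number submitted. For the q=4 run: 1162 + 9538 = 10700 and 741 + 421 = 1162; for q=24: 1227 + 4693 = 5920 and 234 + 993 = 1227. The q=64 run below follows the same identity.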
00:28:04.440 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29555, failed: 0 00:28:04.440 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2631, failed to submit 26924 00:28:04.440 success 359, unsuccessful 2272, failed 0 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.440 17:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109858 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109858 ']' 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109858 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109858 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109858' 00:28:05.007 killing process with pid 109858 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109858 00:28:05.007 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109858 00:28:05.266 00:28:05.266 real 0m10.900s 00:28:05.266 user 0m42.301s 00:28:05.266 sys 0m1.720s 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.266 ************************************ 00:28:05.266 END TEST spdk_target_abort 00:28:05.266 ************************************ 00:28:05.266 17:26:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:05.266 17:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.266 17:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.266 17:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:05.266 ************************************ 00:28:05.266 START TEST kernel_target_abort 00:28:05.266 
************************************ 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:05.266 17:26:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:05.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:05.832 Waiting for block devices as requested 00:28:05.832 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:05.832 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:06.089 No valid GPT data, bailing 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:06.089 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:06.090 No valid GPT data, bailing 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:06.090 No valid GPT data, bailing 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:06.090 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:06.348 No valid GPT data, bailing 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 --hostid=838cb2d2-b846-4910-b58d-4ab4444a5e07 -a 10.0.0.1 -t tcp -s 4420 00:28:06.348 00:28:06.348 Discovery Log Number of Records 2, Generation counter 2 00:28:06.348 =====Discovery Log Entry 0====== 00:28:06.348 trtype: tcp 00:28:06.348 adrfam: ipv4 00:28:06.348 subtype: current discovery subsystem 00:28:06.348 treq: not specified, sq flow control disable supported 00:28:06.348 portid: 1 00:28:06.348 trsvcid: 4420 00:28:06.348 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:06.348 traddr: 10.0.0.1 00:28:06.348 eflags: none 00:28:06.348 sectype: none 00:28:06.348 =====Discovery Log Entry 1====== 00:28:06.348 trtype: tcp 00:28:06.348 adrfam: ipv4 00:28:06.348 subtype: nvme subsystem 00:28:06.348 treq: not specified, sq flow control disable supported 00:28:06.348 portid: 1 00:28:06.348 trsvcid: 4420 00:28:06.348 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:06.348 traddr: 10.0.0.1 00:28:06.348 eflags: none 00:28:06.348 sectype: none 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:06.348 17:26:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:06.348 17:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:09.633 Initializing NVMe Controllers 00:28:09.633 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:09.633 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:09.633 Initialization complete. Launching workers. 00:28:09.633 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35055, failed: 0 00:28:09.633 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35055, failed to submit 0 00:28:09.634 success 0, unsuccessful 35055, failed 0 00:28:09.634 17:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:09.634 17:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:12.920 Initializing NVMe Controllers 00:28:12.920 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:12.920 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:12.920 Initialization complete. Launching workers. 
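The rabort helper traced above assembles the transport ID one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and then drives the abort example binary once per queue depth; the qd=24 results and the qd=64 run continue below. Condensed into a standalone sketch, with all values taken from this run:

# One abort run per queue depth against the kernel target configured above.
qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in "${qds[@]}"; do
    # -w rw -M 50 is a 50/50 read/write mix at 4 KiB blocks; -q sets the
    # queue depth whose abort behaviour is under test.
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done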
00:28:12.920 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67430, failed: 0 00:28:12.920 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28028, failed to submit 39402 00:28:12.920 success 0, unsuccessful 28028, failed 0 00:28:12.920 17:27:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:12.920 17:27:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:16.208 Initializing NVMe Controllers 00:28:16.208 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:16.208 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:16.208 Initialization complete. Launching workers. 00:28:16.208 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79769, failed: 0 00:28:16.208 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19916, failed to submit 59853 00:28:16.208 success 0, unsuccessful 19916, failed 0 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:16.208 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:16.209 17:27:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:16.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.678 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.678 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.678 ************************************ 00:28:18.678 END TEST kernel_target_abort 00:28:18.678 ************************************ 00:28:18.678 00:28:18.678 real 0m13.198s 00:28:18.678 user 0m5.680s 00:28:18.678 sys 0m4.839s 00:28:18.678 17:27:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.678 17:27:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.678 17:27:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:18.678 17:27:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:18.678 
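The clean_kernel_target calls just traced undo the earlier configure_kernel_target configfs writes in reverse order: unlink the port from the subsystem, remove the namespace and port directories, then the subsystem, and finally unload the nvmet modules. A hedged standalone sketch of that teardown, assuming the echo 0 at nvmf/common.sh@714 disables the namespace before removal, which is the usual kernel nvmet convention:

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
# Disable the namespace, drop the port->subsystem link, then remove the
# now-empty configfs directories; only then can the modules unload.
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/$nqn"
rmdir "$nvmet/subsystems/$nqn/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet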
17:27:10 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:18.678 17:27:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.678 rmmod nvme_tcp 00:28:18.678 rmmod nvme_fabrics 00:28:18.678 rmmod nvme_keyring 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109858 ']' 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109858 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109858 ']' 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109858 00:28:18.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109858) - No such process 00:28:18.678 Process with pid 109858 is not found 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109858 is not found' 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:18.678 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:18.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.196 Waiting for block devices as requested 00:28:19.196 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.196 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:19.455 17:27:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:19.455 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.713 17:27:11 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:19.713 00:28:19.713 real 0m27.303s 00:28:19.713 user 0m49.169s 00:28:19.713 sys 0m8.060s 00:28:19.713 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.713 17:27:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.713 ************************************ 00:28:19.713 END TEST nvmf_abort_qd_sizes 00:28:19.713 ************************************ 00:28:19.713 17:27:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:19.713 17:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.713 17:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.713 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:28:19.713 ************************************ 00:28:19.713 START TEST keyring_file 00:28:19.713 ************************************ 00:28:19.713 17:27:12 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:19.713 * Looking for test storage... 
00:28:19.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:19.713 17:27:12 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:19.713 17:27:12 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:28:19.713 17:27:12 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:19.713 17:27:12 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:19.713 17:27:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.713 17:27:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.972 17:27:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:19.972 17:27:12 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.972 17:27:12 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.973 --rc genhtml_branch_coverage=1 00:28:19.973 --rc genhtml_function_coverage=1 00:28:19.973 --rc genhtml_legend=1 00:28:19.973 --rc geninfo_all_blocks=1 00:28:19.973 --rc geninfo_unexecuted_blocks=1 00:28:19.973 00:28:19.973 ' 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.973 --rc genhtml_branch_coverage=1 00:28:19.973 --rc genhtml_function_coverage=1 00:28:19.973 --rc genhtml_legend=1 00:28:19.973 --rc geninfo_all_blocks=1 00:28:19.973 --rc 
geninfo_unexecuted_blocks=1 00:28:19.973 00:28:19.973 ' 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.973 --rc genhtml_branch_coverage=1 00:28:19.973 --rc genhtml_function_coverage=1 00:28:19.973 --rc genhtml_legend=1 00:28:19.973 --rc geninfo_all_blocks=1 00:28:19.973 --rc geninfo_unexecuted_blocks=1 00:28:19.973 00:28:19.973 ' 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.973 --rc genhtml_branch_coverage=1 00:28:19.973 --rc genhtml_function_coverage=1 00:28:19.973 --rc genhtml_legend=1 00:28:19.973 --rc geninfo_all_blocks=1 00:28:19.973 --rc geninfo_unexecuted_blocks=1 00:28:19.973 00:28:19.973 ' 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:19.973 17:27:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.973 17:27:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.973 17:27:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.973 17:27:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.973 17:27:12 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.973 17:27:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.973 17:27:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.973 17:27:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:19.973 17:27:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.973 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:19.973 17:27:12 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ov81BTZuI5 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ov81BTZuI5 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ov81BTZuI5 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Ov81BTZuI5 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GBRi6ajnlB 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:19.973 17:27:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GBRi6ajnlB 00:28:19.973 17:27:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GBRi6ajnlB 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.GBRi6ajnlB 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=110767 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:19.973 17:27:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 110767 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110767 ']' 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
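prep_key, traced above for key0 and key1, writes each 16-byte hex key into a mktemp file in the NVMe TLS PSK interchange format and locks it to mode 0600 before the target comes up. A sketch of the same transformation; the exact encoding shown here (base64 of the key bytes plus a little-endian CRC32, with "00" for digest 0) is an assumption about what the inline python at nvmf/common.sh@733 produces, based on the interchange format, not a copy of the SPDK helper:

key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key" > "$path" <<'PY'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")  # integrity tag (assumed layout)
print(f"NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:")
PY
chmod 0600 "$path"   # the keyring rejects looser modes, as shown later
echo "key written to $path"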
00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.973 17:27:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:19.973 [2024-11-19 17:27:12.495996] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:28:19.973 [2024-11-19 17:27:12.496104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110767 ] 00:28:20.231 [2024-11-19 17:27:12.647444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.231 [2024-11-19 17:27:12.712564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:20.798 17:27:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.798 [2024-11-19 17:27:13.081703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.798 null0 00:28:20.798 [2024-11-19 17:27:13.113681] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:20.798 [2024-11-19 17:27:13.113891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.798 17:27:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.798 17:27:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.799 [2024-11-19 17:27:13.145666] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:20.799 2024/11/19 17:27:13 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:28:20.799 request: 00:28:20.799 { 00:28:20.799 "method": "nvmf_subsystem_add_listener", 00:28:20.799 "params": { 
00:28:20.799 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.799 "secure_channel": false, 00:28:20.799 "listen_address": { 00:28:20.799 "trtype": "tcp", 00:28:20.799 "traddr": "127.0.0.1", 00:28:20.799 "trsvcid": "4420" 00:28:20.799 } 00:28:20.799 } 00:28:20.799 } 00:28:20.799 Got JSON-RPC error response 00:28:20.799 GoRPCClient: error on JSON-RPC call 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:20.799 17:27:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=110790 00:28:20.799 17:27:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 110790 /var/tmp/bperf.sock 00:28:20.799 17:27:13 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110790 ']' 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.799 17:27:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.799 [2024-11-19 17:27:13.213758] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:28:20.799 [2024-11-19 17:27:13.213849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110790 ] 00:28:21.056 [2024-11-19 17:27:13.367278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.056 [2024-11-19 17:27:13.427969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.056 17:27:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.056 17:27:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:21.056 17:27:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:21.056 17:27:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:21.313 17:27:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GBRi6ajnlB 00:28:21.313 17:27:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GBRi6ajnlB 00:28:21.877 17:27:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:21.877 17:27:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:21.877 17:27:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:21.877 17:27:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:21.877 17:27:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:21.877 17:27:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Ov81BTZuI5 == \/\t\m\p\/\t\m\p\.\O\v\8\1\B\T\Z\u\I\5 ]] 00:28:21.877 17:27:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:21.877 17:27:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:21.878 17:27:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:21.878 17:27:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:21.878 17:27:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.136 17:27:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.GBRi6ajnlB == \/\t\m\p\/\t\m\p\.\G\B\R\i\6\a\j\n\l\B ]] 00:28:22.136 17:27:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:22.136 17:27:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:22.136 17:27:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:22.136 17:27:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.136 17:27:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:22.136 17:27:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.702 17:27:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:22.702 17:27:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:22.702 17:27:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:22.702 17:27:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:22.702 17:27:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.702 17:27:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:28:22.702 17:27:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.960 17:27:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:22.960 17:27:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:22.960 17:27:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.218 [2024-11-19 17:27:15.540491] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:23.218 nvme0n1 00:28:23.218 17:27:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:23.218 17:27:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.218 17:27:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.218 17:27:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.218 17:27:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.218 17:27:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.477 17:27:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:23.477 17:27:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:23.477 17:27:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:23.477 17:27:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.477 17:27:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.477 17:27:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.477 17:27:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:23.735 17:27:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:23.735 17:27:16 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.735 Running I/O for 1 seconds... 
00:28:24.670 13312.00 IOPS, 52.00 MiB/s 00:28:24.670 Latency(us) 00:28:24.670 [2024-11-19T17:27:17.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.670 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:24.670 nvme0n1 : 1.01 13365.49 52.21 0.00 0.00 9553.45 3321.48 13881.72 00:28:24.670 [2024-11-19T17:27:17.203Z] =================================================================================================================== 00:28:24.670 [2024-11-19T17:27:17.203Z] Total : 13365.49 52.21 0.00 0.00 9553.45 3321.48 13881.72 00:28:24.670 { 00:28:24.670 "results": [ 00:28:24.670 { 00:28:24.670 "job": "nvme0n1", 00:28:24.670 "core_mask": "0x2", 00:28:24.670 "workload": "randrw", 00:28:24.670 "percentage": 50, 00:28:24.670 "status": "finished", 00:28:24.670 "queue_depth": 128, 00:28:24.670 "io_size": 4096, 00:28:24.670 "runtime": 1.005575, 00:28:24.670 "iops": 13365.487407702061, 00:28:24.670 "mibps": 52.20893518633618, 00:28:24.670 "io_failed": 0, 00:28:24.670 "io_timeout": 0, 00:28:24.670 "avg_latency_us": 9553.450666666666, 00:28:24.670 "min_latency_us": 3321.4836363636364, 00:28:24.670 "max_latency_us": 13881.716363636364 00:28:24.670 } 00:28:24.670 ], 00:28:24.670 "core_count": 1 00:28:24.670 } 00:28:24.670 17:27:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:24.670 17:27:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:25.237 17:27:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.237 17:27:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:25.237 17:27:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:25.237 17:27:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.496 17:27:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.496 17:27:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:25.754 17:27:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:25.755 17:27:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:25.755 17:27:18 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.755 17:27:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:25.755 17:27:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:25.755 [2024-11-19 17:27:18.277628] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:25.755 [2024-11-19 17:27:18.278519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x998fd0 (107): Transport endpoint is not connected 00:28:25.755 [2024-11-19 17:27:18.279510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x998fd0 (9): Bad file descriptor 00:28:25.755 [2024-11-19 17:27:18.280507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:25.755 [2024-11-19 17:27:18.280535] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:25.755 [2024-11-19 17:27:18.280546] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:25.755 [2024-11-19 17:27:18.280557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
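This negative test is the interesting one: the initiator offers key1, which the target side evidently does not accept for this host (the happy path earlier used key0), the TLS setup collapses, and the socket errors (errno 107 Transport endpoint is not connected, then a stale descriptor) surface as the Code=-5 Input/output error below. The NOT wrapper turns that expected failure into a pass; a minimal equivalent of the pattern (the real helper in autotest_common.sh also validates its argument):

# Invert a command's exit status: the wrong-PSK attach must fail.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key1 \
    && echo "wrong-PSK attach failed, as the test requires"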
00:28:25.755 2024/11/19 17:27:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:25.755 request: 00:28:25.755 { 00:28:25.755 "method": "bdev_nvme_attach_controller", 00:28:25.755 "params": { 00:28:25.755 "name": "nvme0", 00:28:25.755 "trtype": "tcp", 00:28:25.755 "traddr": "127.0.0.1", 00:28:25.755 "adrfam": "ipv4", 00:28:25.755 "trsvcid": "4420", 00:28:25.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:25.755 "prchk_reftag": false, 00:28:25.755 "prchk_guard": false, 00:28:25.755 "hdgst": false, 00:28:25.755 "ddgst": false, 00:28:25.755 "psk": "key1", 00:28:25.755 "allow_unrecognized_csi": false 00:28:25.755 } 00:28:25.755 } 00:28:25.755 Got JSON-RPC error response 00:28:25.755 GoRPCClient: error on JSON-RPC call 00:28:26.013 17:27:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:26.013 17:27:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:26.013 17:27:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:26.013 17:27:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:26.013 17:27:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:26.013 17:27:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:26.013 17:27:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.013 17:27:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.013 17:27:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.013 17:27:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.285 17:27:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:26.285 17:27:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:26.285 17:27:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:26.285 17:27:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.285 17:27:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.285 17:27:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.285 17:27:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:26.558 17:27:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:26.558 17:27:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:26.558 17:27:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:26.558 17:27:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:26.558 17:27:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:26.828 17:27:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:26.828 17:27:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:28:26.828 17:27:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:27.087 17:27:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:27.087 17:27:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Ov81BTZuI5 00:28:27.087 17:27:19 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.087 17:27:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.087 17:27:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.345 [2024-11-19 17:27:19.795086] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ov81BTZuI5': 0100660 00:28:27.345 [2024-11-19 17:27:19.795133] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:27.345 2024/11/19 17:27:19 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Ov81BTZuI5], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:27.345 request: 00:28:27.345 { 00:28:27.345 "method": "keyring_file_add_key", 00:28:27.345 "params": { 00:28:27.345 "name": "key0", 00:28:27.345 "path": "/tmp/tmp.Ov81BTZuI5" 00:28:27.345 } 00:28:27.345 } 00:28:27.345 Got JSON-RPC error response 00:28:27.345 GoRPCClient: error on JSON-RPC call 00:28:27.345 17:27:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:27.345 17:27:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:27.345 17:27:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:27.345 17:27:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:27.345 17:27:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Ov81BTZuI5 00:28:27.345 17:27:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.345 17:27:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ov81BTZuI5 00:28:27.603 17:27:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Ov81BTZuI5 00:28:27.603 17:27:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:27.603 17:27:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:27.603 17:27:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:27.603 17:27:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:27.603 17:27:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.603 17:27:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.861 17:27:20 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:27.861 17:27:20 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:27.861 17:27:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.862 17:27:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.862 17:27:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.120 [2024-11-19 17:27:20.527236] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Ov81BTZuI5': No such file or directory 00:28:28.120 [2024-11-19 17:27:20.527276] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:28.120 [2024-11-19 17:27:20.527296] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:28.120 [2024-11-19 17:27:20.527305] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:28.120 [2024-11-19 17:27:20.527315] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:28.120 [2024-11-19 17:27:20.527325] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:28.120 2024/11/19 17:27:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:28:28.120 request: 00:28:28.120 { 00:28:28.120 "method": "bdev_nvme_attach_controller", 00:28:28.120 "params": { 00:28:28.120 "name": "nvme0", 00:28:28.120 "trtype": "tcp", 00:28:28.120 "traddr": "127.0.0.1", 00:28:28.120 "adrfam": "ipv4", 00:28:28.120 "trsvcid": "4420", 00:28:28.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:28.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:28.120 "prchk_reftag": false, 00:28:28.120 "prchk_guard": false, 00:28:28.120 "hdgst": false, 00:28:28.120 "ddgst": false, 00:28:28.120 "psk": "key0", 00:28:28.120 "allow_unrecognized_csi": false 00:28:28.120 } 00:28:28.120 } 00:28:28.120 Got JSON-RPC error response 00:28:28.120 
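The two NOT cases above exercise the key-file validation path (keyring_file_check_path in keyring.c): a key file with mode 0660 is rejected because group/other access bits are not allowed, and once the file is removed the stat itself fails, so bdev_nvme_attach_controller can no longer obtain key0. A minimal Python sketch of the equivalent checks, assuming the permission rule is simply "no group/other bits", which is what the 0660-vs-0600 behaviour in the log suggests:

    import os

    def check_key_path(path: str) -> None:
        try:
            st = os.stat(path)  # the "rm -f" case: stat fails outright
        except FileNotFoundError:
            raise RuntimeError(f"Could not stat key file '{path}': No such file or directory")
        if st.st_mode & 0o077:  # 0660 has group bits set -> rejected; 0600 passes
            raise RuntimeError(f"Invalid permissions for key file '{path}': 0{st.st_mode:o}")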
GoRPCClient: error on JSON-RPC call 00:28:28.120 17:27:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:28.120 17:27:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.120 17:27:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.120 17:27:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.120 17:27:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:28.120 17:27:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:28.379 17:27:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2X4Pwnz3DZ 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:28.379 17:27:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2X4Pwnz3DZ 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2X4Pwnz3DZ 00:28:28.379 17:27:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2X4Pwnz3DZ 00:28:28.379 17:27:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2X4Pwnz3DZ 00:28:28.379 17:27:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2X4Pwnz3DZ 00:28:28.638 17:27:21 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.638 17:27:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:28.897 nvme0n1 00:28:29.156 17:27:21 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:29.156 17:27:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.156 17:27:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.156 17:27:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.156 17:27:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.156 17:27:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
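prep_key (keyring/common.sh) generates the key-file contents in the NVMe TLS PSK interchange format: format_key hands the key material and digest id to an inline Python snippet that appends a CRC-32 and base64-encodes the result as NVMeTLSkey-1:<digest>:<base64>:. The payload visible later in this log (NVMeTLSkey-1:00:MDAxMTIy…JEiQ:) decodes to the ASCII string 00112233445566778899aabbccddeeff plus four trailing CRC bytes. A sketch of that transform; the little-endian CRC byte order follows the NVMe/TCP interchange-format definition and is an assumption here, since the log does not show it:

    import base64, zlib

    def format_interchange_psk(key: bytes, digest: int = 0) -> str:
        crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 over the key material
        return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:"

    # Note the test's key material is the hex string itself, as ASCII bytes:
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff"))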
00:28:29.415 17:27:21 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:29.415 17:27:21 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:29.415 17:27:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:29.673 17:27:21 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:29.673 17:27:21 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:29.673 17:27:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.673 17:27:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.673 17:27:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.932 17:27:22 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:29.932 17:27:22 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:29.932 17:27:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.932 17:27:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.932 17:27:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.932 17:27:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.932 17:27:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:30.191 17:27:22 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:30.191 17:27:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:30.191 17:27:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:30.450 17:27:22 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:30.450 17:27:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.450 17:27:22 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:30.708 17:27:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:30.708 17:27:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2X4Pwnz3DZ 00:28:30.708 17:27:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2X4Pwnz3DZ 00:28:30.708 17:27:23 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GBRi6ajnlB 00:28:30.708 17:27:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GBRi6ajnlB 00:28:30.967 17:27:23 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:30.967 17:27:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:31.534 nvme0n1 00:28:31.534 17:27:23 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:31.534 17:27:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
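All of the refcount and removal checks above follow one pattern: call keyring_get_keys on the bperf socket and filter the JSON with jq. save_config then snapshots the whole runtime state, including the two keyring_file_add_key entries, and the test later boots a second bdevperf from that snapshot via -c /dev/fd/63. The same queries from Python, as a sketch (get_refcnt mirrors keyring/common.sh; the rpc helper name is illustrative):

    import json, subprocess

    RPC, SOCK = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "/var/tmp/bperf.sock"

    def rpc(*args: str):
        return json.loads(subprocess.check_output([RPC, "-s", SOCK, *args]))

    def get_refcnt(name: str) -> int:
        # jq equivalent: '.[] | select(.name == "key0") | .refcnt'
        return next(k["refcnt"] for k in rpc("keyring_get_keys") if k["name"] == name)

    config = rpc("save_config")
    keyring = next(s for s in config["subsystems"] if s["subsystem"] == "keyring")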
00:28:31.793 17:27:24 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:31.793 "subsystems": [ 00:28:31.793 { 00:28:31.793 "subsystem": "keyring", 00:28:31.793 "config": [ 00:28:31.793 { 00:28:31.793 "method": "keyring_file_add_key", 00:28:31.793 "params": { 00:28:31.793 "name": "key0", 00:28:31.793 "path": "/tmp/tmp.2X4Pwnz3DZ" 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "keyring_file_add_key", 00:28:31.793 "params": { 00:28:31.793 "name": "key1", 00:28:31.793 "path": "/tmp/tmp.GBRi6ajnlB" 00:28:31.793 } 00:28:31.793 } 00:28:31.793 ] 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "subsystem": "iobuf", 00:28:31.793 "config": [ 00:28:31.793 { 00:28:31.793 "method": "iobuf_set_options", 00:28:31.793 "params": { 00:28:31.793 "enable_numa": false, 00:28:31.793 "large_bufsize": 135168, 00:28:31.793 "large_pool_count": 1024, 00:28:31.793 "small_bufsize": 8192, 00:28:31.793 "small_pool_count": 8192 00:28:31.793 } 00:28:31.793 } 00:28:31.793 ] 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "subsystem": "sock", 00:28:31.793 "config": [ 00:28:31.793 { 00:28:31.793 "method": "sock_set_default_impl", 00:28:31.793 "params": { 00:28:31.793 "impl_name": "posix" 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "sock_impl_set_options", 00:28:31.793 "params": { 00:28:31.793 "enable_ktls": false, 00:28:31.793 "enable_placement_id": 0, 00:28:31.793 "enable_quickack": false, 00:28:31.793 "enable_recv_pipe": true, 00:28:31.793 "enable_zerocopy_send_client": false, 00:28:31.793 "enable_zerocopy_send_server": true, 00:28:31.793 "impl_name": "ssl", 00:28:31.793 "recv_buf_size": 4096, 00:28:31.793 "send_buf_size": 4096, 00:28:31.793 "tls_version": 0, 00:28:31.793 "zerocopy_threshold": 0 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "sock_impl_set_options", 00:28:31.793 "params": { 00:28:31.793 "enable_ktls": false, 00:28:31.793 "enable_placement_id": 0, 00:28:31.793 "enable_quickack": false, 00:28:31.793 "enable_recv_pipe": true, 00:28:31.793 "enable_zerocopy_send_client": false, 00:28:31.793 "enable_zerocopy_send_server": true, 00:28:31.793 "impl_name": "posix", 00:28:31.793 "recv_buf_size": 2097152, 00:28:31.793 "send_buf_size": 2097152, 00:28:31.793 "tls_version": 0, 00:28:31.793 "zerocopy_threshold": 0 00:28:31.793 } 00:28:31.793 } 00:28:31.793 ] 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "subsystem": "vmd", 00:28:31.793 "config": [] 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "subsystem": "accel", 00:28:31.793 "config": [ 00:28:31.793 { 00:28:31.793 "method": "accel_set_options", 00:28:31.793 "params": { 00:28:31.793 "buf_count": 2048, 00:28:31.793 "large_cache_size": 16, 00:28:31.793 "sequence_count": 2048, 00:28:31.793 "small_cache_size": 128, 00:28:31.793 "task_count": 2048 00:28:31.793 } 00:28:31.793 } 00:28:31.793 ] 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "subsystem": "bdev", 00:28:31.793 "config": [ 00:28:31.793 { 00:28:31.793 "method": "bdev_set_options", 00:28:31.793 "params": { 00:28:31.793 "bdev_auto_examine": true, 00:28:31.793 "bdev_io_cache_size": 256, 00:28:31.793 "bdev_io_pool_size": 65535, 00:28:31.793 "iobuf_large_cache_size": 16, 00:28:31.793 "iobuf_small_cache_size": 128 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "bdev_raid_set_options", 00:28:31.793 "params": { 00:28:31.793 "process_max_bandwidth_mb_sec": 0, 00:28:31.793 "process_window_size_kb": 1024 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "bdev_iscsi_set_options", 00:28:31.793 "params": { 00:28:31.793 
"timeout_sec": 30 00:28:31.793 } 00:28:31.793 }, 00:28:31.793 { 00:28:31.793 "method": "bdev_nvme_set_options", 00:28:31.793 "params": { 00:28:31.793 "action_on_timeout": "none", 00:28:31.793 "allow_accel_sequence": false, 00:28:31.793 "arbitration_burst": 0, 00:28:31.793 "bdev_retry_count": 3, 00:28:31.793 "ctrlr_loss_timeout_sec": 0, 00:28:31.793 "delay_cmd_submit": true, 00:28:31.793 "dhchap_dhgroups": [ 00:28:31.793 "null", 00:28:31.793 "ffdhe2048", 00:28:31.793 "ffdhe3072", 00:28:31.793 "ffdhe4096", 00:28:31.793 "ffdhe6144", 00:28:31.793 "ffdhe8192" 00:28:31.793 ], 00:28:31.793 "dhchap_digests": [ 00:28:31.793 "sha256", 00:28:31.793 "sha384", 00:28:31.793 "sha512" 00:28:31.793 ], 00:28:31.793 "disable_auto_failback": false, 00:28:31.793 "fast_io_fail_timeout_sec": 0, 00:28:31.793 "generate_uuids": false, 00:28:31.793 "high_priority_weight": 0, 00:28:31.793 "io_path_stat": false, 00:28:31.793 "io_queue_requests": 512, 00:28:31.793 "keep_alive_timeout_ms": 10000, 00:28:31.793 "low_priority_weight": 0, 00:28:31.793 "medium_priority_weight": 0, 00:28:31.793 "nvme_adminq_poll_period_us": 10000, 00:28:31.793 "nvme_error_stat": false, 00:28:31.793 "nvme_ioq_poll_period_us": 0, 00:28:31.793 "rdma_cm_event_timeout_ms": 0, 00:28:31.793 "rdma_max_cq_size": 0, 00:28:31.793 "rdma_srq_size": 0, 00:28:31.793 "reconnect_delay_sec": 0, 00:28:31.793 "timeout_admin_us": 0, 00:28:31.793 "timeout_us": 0, 00:28:31.793 "transport_ack_timeout": 0, 00:28:31.794 "transport_retry_count": 4, 00:28:31.794 "transport_tos": 0 00:28:31.794 } 00:28:31.794 }, 00:28:31.794 { 00:28:31.794 "method": "bdev_nvme_attach_controller", 00:28:31.794 "params": { 00:28:31.794 "adrfam": "IPv4", 00:28:31.794 "ctrlr_loss_timeout_sec": 0, 00:28:31.794 "ddgst": false, 00:28:31.794 "fast_io_fail_timeout_sec": 0, 00:28:31.794 "hdgst": false, 00:28:31.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.794 "multipath": "multipath", 00:28:31.794 "name": "nvme0", 00:28:31.794 "prchk_guard": false, 00:28:31.794 "prchk_reftag": false, 00:28:31.794 "psk": "key0", 00:28:31.794 "reconnect_delay_sec": 0, 00:28:31.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.794 "traddr": "127.0.0.1", 00:28:31.794 "trsvcid": "4420", 00:28:31.794 "trtype": "TCP" 00:28:31.794 } 00:28:31.794 }, 00:28:31.794 { 00:28:31.794 "method": "bdev_nvme_set_hotplug", 00:28:31.794 "params": { 00:28:31.794 "enable": false, 00:28:31.794 "period_us": 100000 00:28:31.794 } 00:28:31.794 }, 00:28:31.794 { 00:28:31.794 "method": "bdev_wait_for_examine" 00:28:31.794 } 00:28:31.794 ] 00:28:31.794 }, 00:28:31.794 { 00:28:31.794 "subsystem": "nbd", 00:28:31.794 "config": [] 00:28:31.794 } 00:28:31.794 ] 00:28:31.794 }' 00:28:31.794 17:27:24 keyring_file -- keyring/file.sh@115 -- # killprocess 110790 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110790 ']' 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110790 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110790 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:31.794 killing process with pid 110790 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 110790' 00:28:31.794 Received shutdown signal, test time was about 1.000000 seconds 00:28:31.794 00:28:31.794 Latency(us) 00:28:31.794 [2024-11-19T17:27:24.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.794 [2024-11-19T17:27:24.327Z] =================================================================================================================== 00:28:31.794 [2024-11-19T17:27:24.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@973 -- # kill 110790 00:28:31.794 17:27:24 keyring_file -- common/autotest_common.sh@978 -- # wait 110790 00:28:32.053 17:27:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=111248 00:28:32.053 17:27:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111248 /var/tmp/bperf.sock 00:28:32.053 17:27:24 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:32.053 17:27:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111248 ']' 00:28:32.053 17:27:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:32.053 "subsystems": [ 00:28:32.053 { 00:28:32.053 "subsystem": "keyring", 00:28:32.053 "config": [ 00:28:32.053 { 00:28:32.053 "method": "keyring_file_add_key", 00:28:32.053 "params": { 00:28:32.053 "name": "key0", 00:28:32.053 "path": "/tmp/tmp.2X4Pwnz3DZ" 00:28:32.053 } 00:28:32.053 }, 00:28:32.053 { 00:28:32.053 "method": "keyring_file_add_key", 00:28:32.053 "params": { 00:28:32.053 "name": "key1", 00:28:32.053 "path": "/tmp/tmp.GBRi6ajnlB" 00:28:32.053 } 00:28:32.053 } 00:28:32.053 ] 00:28:32.053 }, 00:28:32.053 { 00:28:32.053 "subsystem": "iobuf", 00:28:32.053 "config": [ 00:28:32.053 { 00:28:32.053 "method": "iobuf_set_options", 00:28:32.053 "params": { 00:28:32.053 "enable_numa": false, 00:28:32.053 "large_bufsize": 135168, 00:28:32.053 "large_pool_count": 1024, 00:28:32.053 "small_bufsize": 8192, 00:28:32.054 "small_pool_count": 8192 00:28:32.054 } 00:28:32.054 } 00:28:32.054 ] 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "subsystem": "sock", 00:28:32.054 "config": [ 00:28:32.054 { 00:28:32.054 "method": "sock_set_default_impl", 00:28:32.054 "params": { 00:28:32.054 "impl_name": "posix" 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "sock_impl_set_options", 00:28:32.054 "params": { 00:28:32.054 "enable_ktls": false, 00:28:32.054 "enable_placement_id": 0, 00:28:32.054 "enable_quickack": false, 00:28:32.054 "enable_recv_pipe": true, 00:28:32.054 "enable_zerocopy_send_client": false, 00:28:32.054 "enable_zerocopy_send_server": true, 00:28:32.054 "impl_name": "ssl", 00:28:32.054 "recv_buf_size": 4096, 00:28:32.054 "send_buf_size": 4096, 00:28:32.054 "tls_version": 0, 00:28:32.054 "zerocopy_threshold": 0 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "sock_impl_set_options", 00:28:32.054 "params": { 00:28:32.054 "enable_ktls": false, 00:28:32.054 "enable_placement_id": 0, 00:28:32.054 "enable_quickack": false, 00:28:32.054 "enable_recv_pipe": true, 00:28:32.054 "enable_zerocopy_send_client": false, 00:28:32.054 "enable_zerocopy_send_server": true, 00:28:32.054 "impl_name": "posix", 00:28:32.054 "recv_buf_size": 2097152, 00:28:32.054 "send_buf_size": 2097152, 00:28:32.054 "tls_version": 0, 00:28:32.054 "zerocopy_threshold": 0 00:28:32.054 } 00:28:32.054 } 00:28:32.054 ] 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "subsystem": "vmd", 00:28:32.054 "config": [] 00:28:32.054 
}, 00:28:32.054 { 00:28:32.054 "subsystem": "accel", 00:28:32.054 "config": [ 00:28:32.054 { 00:28:32.054 "method": "accel_set_options", 00:28:32.054 "params": { 00:28:32.054 "buf_count": 2048, 00:28:32.054 "large_cache_size": 16, 00:28:32.054 "sequence_count": 2048, 00:28:32.054 "small_cache_size": 128, 00:28:32.054 "task_count": 2048 00:28:32.054 } 00:28:32.054 } 00:28:32.054 ] 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "subsystem": "bdev", 00:28:32.054 "config": [ 00:28:32.054 { 00:28:32.054 "method": "bdev_set_options", 00:28:32.054 "params": { 00:28:32.054 "bdev_auto_examine": true, 00:28:32.054 "bdev_io_cache_size": 256, 00:28:32.054 "bdev_io_pool_size": 65535, 00:28:32.054 "iobuf_large_cache_size": 16, 00:28:32.054 "iobuf_small_cache_size": 128 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "bdev_raid_set_options", 00:28:32.054 "params": { 00:28:32.054 "process_max_bandwidth_mb_sec": 0, 00:28:32.054 "process_window_size_kb": 1024 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "bdev_iscsi_set_options", 00:28:32.054 "params": { 00:28:32.054 "timeout_sec": 30 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "bdev_nvme_set_options", 00:28:32.054 "params": { 00:28:32.054 "action_on_timeout": "none", 00:28:32.054 "allow_accel_sequence": false, 00:28:32.054 "arbitration_burst": 0, 00:28:32.054 "bdev_retry_count": 3, 00:28:32.054 "ctrlr_loss_timeout_sec": 0, 00:28:32.054 "delay_cmd_submit": true, 00:28:32.054 "dhchap_dhgroups": [ 00:28:32.054 "null", 00:28:32.054 "ffdhe2048", 00:28:32.054 "ffdhe3072", 00:28:32.054 "ffdhe4096", 00:28:32.054 "ffdhe6144", 00:28:32.054 "ffdhe8192" 00:28:32.054 ], 00:28:32.054 "dhchap_digests": [ 00:28:32.054 "sha256", 00:28:32.054 "sha384", 00:28:32.054 "sha512" 00:28:32.054 ], 00:28:32.054 "disable_auto_failback": false, 00:28:32.054 "fast_io_fail_timeout_sec": 0, 00:28:32.054 "generate_uuids": false, 00:28:32.054 "high_priority_weight": 0, 00:28:32.054 "io_path_stat": false, 00:28:32.054 "io_queue_requests": 512, 00:28:32.054 "keep_alive_timeout_ms": 10000, 00:28:32.054 "low_priority_weight": 0, 00:28:32.054 "medium_priority_weight": 0, 00:28:32.054 "nvme_adminq_poll_period_us": 10000, 00:28:32.054 "nvme_error_stat": false, 00:28:32.054 "nvme_ioq_poll_period_us": 0, 00:28:32.054 "rdma_cm_event_timeout_ms": 0, 00:28:32.054 "rdma_max_cq_size": 0, 00:28:32.054 "rdma_srq_size": 0, 00:28:32.054 "reconnect_delay_sec": 0, 00:28:32.054 "timeout_admin_us": 0, 00:28:32.054 "timeout_us": 0, 00:28:32.054 "transport_ack_timeout": 0, 00:28:32.054 "transport_retry_count": 4, 00:28:32.054 "transport_tos": 0 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "bdev_nvme_attach_controller", 00:28:32.054 "params": { 00:28:32.054 "adrfam": "IPv4", 00:28:32.054 "ctrlr_loss_timeout_sec": 0, 00:28:32.054 "ddgst": false, 00:28:32.054 "fast_io_fail_timeout_sec": 0, 00:28:32.054 "hdgst": false, 00:28:32.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.054 "multipath": "multipath", 00:28:32.054 "name": "nvme0", 00:28:32.054 "prchk_guard": false, 00:28:32.054 "prchk_reftag": false, 00:28:32.054 "psk": "key0", 00:28:32.054 "reconnect_delay_sec": 0, 00:28:32.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.054 "traddr": "127.0.0.1", 00:28:32.054 "trsvcid": "4420", 00:28:32.054 "trtype": "TCP" 00:28:32.054 } 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "method": "bdev_nvme_set_hotplug", 00:28:32.054 "params": { 00:28:32.054 "enable": false, 00:28:32.054 "period_us": 100000 00:28:32.054 } 00:28:32.054 
}, 00:28:32.054 { 00:28:32.054 "method": "bdev_wait_for_examine" 00:28:32.054 } 00:28:32.054 ] 00:28:32.054 }, 00:28:32.054 { 00:28:32.054 "subsystem": "nbd", 00:28:32.054 "config": [] 00:28:32.054 } 00:28:32.054 ] 00:28:32.054 }' 00:28:32.054 17:27:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.054 17:27:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.054 17:27:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.054 17:27:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.054 17:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:32.054 [2024-11-19 17:27:24.484291] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:28:32.054 [2024-11-19 17:27:24.485147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111248 ] 00:28:32.313 [2024-11-19 17:27:24.628807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.313 [2024-11-19 17:27:24.670134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.572 [2024-11-19 17:27:24.880307] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:33.139 17:27:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.139 17:27:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:33.139 17:27:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:33.139 17:27:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.139 17:27:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:33.397 17:27:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:33.397 17:27:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:33.397 17:27:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:33.397 17:27:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.397 17:27:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.397 17:27:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.397 17:27:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.655 17:27:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:33.655 17:27:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:33.656 17:27:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.656 17:27:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:33.656 17:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.656 17:27:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.656 17:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:33.914 17:27:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:33.914 17:27:26 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:33.914 17:27:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:28:33.914 17:27:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:34.173 17:27:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:34.173 17:27:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:34.173 17:27:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2X4Pwnz3DZ /tmp/tmp.GBRi6ajnlB 00:28:34.173 17:27:26 keyring_file -- keyring/file.sh@20 -- # killprocess 111248 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111248 ']' 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111248 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111248 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.173 killing process with pid 111248 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111248' 00:28:34.173 Received shutdown signal, test time was about 1.000000 seconds 00:28:34.173 00:28:34.173 Latency(us) 00:28:34.173 [2024-11-19T17:27:26.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.173 [2024-11-19T17:27:26.706Z] =================================================================================================================== 00:28:34.173 [2024-11-19T17:27:26.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@973 -- # kill 111248 00:28:34.173 17:27:26 keyring_file -- common/autotest_common.sh@978 -- # wait 111248 00:28:34.433 17:27:26 keyring_file -- keyring/file.sh@21 -- # killprocess 110767 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110767 ']' 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110767 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110767 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:34.433 killing process with pid 110767 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110767' 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@973 -- # kill 110767 00:28:34.433 17:27:26 keyring_file -- common/autotest_common.sh@978 -- # wait 110767 00:28:35.000 00:28:35.000 real 0m15.260s 00:28:35.000 user 0m37.934s 00:28:35.000 sys 0m3.352s 00:28:35.000 17:27:27 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.000 ************************************ 00:28:35.000 END TEST keyring_file 00:28:35.000 17:27:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:35.000 
************************************ 00:28:35.000 17:27:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:28:35.000 17:27:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:35.000 17:27:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:35.000 17:27:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.000 17:27:27 -- common/autotest_common.sh@10 -- # set +x 00:28:35.000 ************************************ 00:28:35.000 START TEST keyring_linux 00:28:35.000 ************************************ 00:28:35.000 17:27:27 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:35.000 Joined session keyring: 864311217 00:28:35.000 * Looking for test storage... 00:28:35.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:35.000 17:27:27 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:35.000 17:27:27 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:28:35.000 17:27:27 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:35.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.260 --rc genhtml_branch_coverage=1 00:28:35.260 --rc genhtml_function_coverage=1 00:28:35.260 --rc genhtml_legend=1 00:28:35.260 --rc geninfo_all_blocks=1 00:28:35.260 --rc geninfo_unexecuted_blocks=1 00:28:35.260 00:28:35.260 ' 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:35.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.260 --rc genhtml_branch_coverage=1 00:28:35.260 --rc genhtml_function_coverage=1 00:28:35.260 --rc genhtml_legend=1 00:28:35.260 --rc geninfo_all_blocks=1 00:28:35.260 --rc geninfo_unexecuted_blocks=1 00:28:35.260 00:28:35.260 ' 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:35.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.260 --rc genhtml_branch_coverage=1 00:28:35.260 --rc genhtml_function_coverage=1 00:28:35.260 --rc genhtml_legend=1 00:28:35.260 --rc geninfo_all_blocks=1 00:28:35.260 --rc geninfo_unexecuted_blocks=1 00:28:35.260 00:28:35.260 ' 00:28:35.260 17:27:27 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:35.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.260 --rc genhtml_branch_coverage=1 00:28:35.260 --rc genhtml_function_coverage=1 00:28:35.260 --rc genhtml_legend=1 00:28:35.260 --rc geninfo_all_blocks=1 00:28:35.260 --rc geninfo_unexecuted_blocks=1 00:28:35.260 00:28:35.260 ' 00:28:35.260 17:27:27 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:35.260 17:27:27 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.260 17:27:27 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:838cb2d2-b846-4910-b58d-4ab4444a5e07 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=838cb2d2-b846-4910-b58d-4ab4444a5e07 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.260 17:27:27 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.260 17:27:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.261 17:27:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.261 17:27:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.261 17:27:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.261 17:27:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:35.261 17:27:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:35.261 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:35.261 /tmp/:spdk-test:key0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:35.261 17:27:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:35.261 /tmp/:spdk-test:key1 00:28:35.261 17:27:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=111408 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 111408 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111408 ']' 00:28:35.261 17:27:27 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.261 17:27:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:35.261 [2024-11-19 17:27:27.770992] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:28:35.261 [2024-11-19 17:27:27.771097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111408 ] 00:28:35.520 [2024-11-19 17:27:27.916286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.520 [2024-11-19 17:27:27.964853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.778 17:27:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.778 17:27:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:35.778 17:27:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:35.778 17:27:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.778 17:27:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:36.038 [2024-11-19 17:27:28.312035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.038 null0 00:28:36.038 [2024-11-19 17:27:28.344006] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:36.038 [2024-11-19 17:27:28.344220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.038 17:27:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:36.038 627760411 00:28:36.038 17:27:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:36.038 885169384 00:28:36.038 17:27:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=111435 00:28:36.038 17:27:28 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:36.038 17:27:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 111435 /var/tmp/bperf.sock 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111435 ']' 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.038 17:27:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:36.038 [2024-11-19 17:27:28.434050] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
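Unlike keyring_file, keyring_linux registers the PSKs with the kernel session keyring: keyctl add user <name> <payload> @s stores the interchange-format key and prints its serial number (627760411 and 885169384 above), and SPDK then references the key by name as --psk :spdk-test:key0. The later check_keys steps re-resolve the serial with keyctl search and read the payload back with keyctl print. A sketch of those calls through the keyutils CLI (the placeholder payload stands in for the full interchange string):

    import subprocess

    def keyctl(*args: str) -> str:
        return subprocess.check_output(["keyctl", *args], text=True).strip()

    psk = "NVMeTLSkey-1:00:...:"  # placeholder for the full interchange-format PSK
    sn = keyctl("add", "user", ":spdk-test:key0", psk, "@s")  # prints the serial
    assert sn == keyctl("search", "@s", "user", ":spdk-test:key0")
    assert keyctl("print", sn) == psk  # the payload round-trips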
00:28:36.038 [2024-11-19 17:27:28.434141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111435 ]
00:28:36.297 [2024-11-19 17:27:28.586414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:36.297 [2024-11-19 17:27:28.647143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.297 17:27:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:36.297 17:27:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:28:36.297 17:27:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:28:36.297 17:27:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:28:36.555 17:27:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:28:36.555 17:27:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:36.813 17:27:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:28:36.813 17:27:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:28:37.071 [2024-11-19 17:27:29.488228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:28:37.071 nvme0n1
00:28:37.071 17:27:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:28:37.071 17:27:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:28:37.071 17:27:29 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:28:37.071 17:27:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:28:37.071 17:27:29 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:28:37.071 17:27:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:37.331 17:27:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:28:37.331 17:27:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:28:37.331 17:27:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:28:37.331 17:27:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:28:37.331 17:27:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:37.331 17:27:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:37.331 17:27:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@25 -- # sn=627760411
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 627760411 == \6\2\7\7\6\0\4\1\1 ]]
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 627760411
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:28:37.899 17:27:30 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:37.899 Running I/O for 1 seconds...
00:28:38.835 13191.00 IOPS, 51.53 MiB/s
00:28:38.835 Latency(us)
00:28:38.835 [2024-11-19T17:27:31.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:38.835 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:38.835 nvme0n1 : 1.01 13231.07 51.68 0.00 0.00 9641.40 5421.61 16562.73
00:28:38.835 [2024-11-19T17:27:31.368Z] ===================================================================================================================
00:28:38.835 [2024-11-19T17:27:31.368Z] Total : 13231.07 51.68 0.00 0.00 9641.40 5421.61 16562.73
00:28:38.835 {
00:28:38.835 "results": [
00:28:38.835 {
00:28:38.835 "job": "nvme0n1",
00:28:38.835 "core_mask": "0x2",
00:28:38.835 "workload": "randread",
00:28:38.835 "status": "finished",
00:28:38.835 "queue_depth": 128,
00:28:38.835 "io_size": 4096,
00:28:38.835 "runtime": 1.006721,
00:28:38.835 "iops": 13231.073951968818,
00:28:38.835 "mibps": 51.683882624878194,
00:28:38.835 "io_failed": 0,
00:28:38.835 "io_timeout": 0,
00:28:38.835 "avg_latency_us": 9641.399176631177,
00:28:38.835 "min_latency_us": 5421.614545454546,
00:28:38.835 "max_latency_us": 16562.734545454547
00:28:38.835 }
00:28:38.835 ],
00:28:38.835 "core_count": 1
00:28:38.835 }
00:28:38.835 17:27:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:28:38.835 17:27:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:28:39.094 17:27:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:28:39.094 17:27:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:28:39.094 17:27:31 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:28:39.094 17:27:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:28:39.094 17:27:31 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:28:39.094 17:27:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:39.353 17:27:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:28:39.353 17:27:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:28:39.353 17:27:31 keyring_linux -- keyring/linux.sh@23 -- # return
00:28:39.353 17:27:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:39.353 17:27:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:39.353 17:27:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:39.921 [2024-11-19 17:27:32.217348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:28:39.921 [2024-11-19 17:27:32.217423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dbf50 (107): Transport endpoint is not connected
00:28:39.921 [2024-11-19 17:27:32.218412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dbf50 (9): Bad file descriptor
00:28:39.921 [2024-11-19 17:27:32.219408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:28:39.921 [2024-11-19 17:27:32.219433] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:28:39.921 [2024-11-19 17:27:32.219443] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:28:39.921 [2024-11-19 17:27:32.219456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:28:39.921 2024/11/19 17:27:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:28:39.921 request:
00:28:39.921 {
00:28:39.921 "method": "bdev_nvme_attach_controller",
00:28:39.921 "params": {
00:28:39.921 "name": "nvme0",
00:28:39.921 "trtype": "tcp",
00:28:39.921 "traddr": "127.0.0.1",
00:28:39.921 "adrfam": "ipv4",
00:28:39.921 "trsvcid": "4420",
00:28:39.921 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:39.921 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:39.921 "prchk_reftag": false,
00:28:39.921 "prchk_guard": false,
00:28:39.921 "hdgst": false,
00:28:39.921 "ddgst": false,
00:28:39.921 "psk": ":spdk-test:key1",
00:28:39.922 "allow_unrecognized_csi": false
00:28:39.922 }
00:28:39.922 }
00:28:39.922 Got JSON-RPC error response
00:28:39.922 GoRPCClient: error on JSON-RPC call
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@33 -- # sn=627760411
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 627760411
00:28:39.922 1 links removed
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@33 -- # sn=885169384
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 885169384
00:28:39.922 1 links removed
00:28:39.922 17:27:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 111435
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111435 ']'
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111435
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111435
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:39.922 killing process with pid 111435
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111435'
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 111435
00:28:39.922 Received shutdown signal, test time was about 1.000000 seconds
00:28:39.922
00:28:39.922 Latency(us)
00:28:39.922 [2024-11-19T17:27:32.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.922 [2024-11-19T17:27:32.455Z] ===================================================================================================================
00:28:39.922 [2024-11-19T17:27:32.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:39.922 17:27:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 111435
00:28:40.180 17:27:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 111408
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111408 ']'
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111408
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111408
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:40.180 killing process with pid 111408
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111408'
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 111408
00:28:40.180 17:27:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 111408
00:28:40.747
00:28:40.747 real 0m5.686s
00:28:40.747 user 0m10.922s
00:28:40.747 sys 0m1.689s
00:28:40.747 17:27:33 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:40.747 17:27:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:28:40.747 ************************************
00:28:40.747 END TEST keyring_linux
00:28:40.747 ************************************
00:28:40.747 17:27:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:28:40.747 17:27:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:28:40.747 17:27:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:28:40.747 17:27:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:28:40.747 17:27:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:28:40.747 17:27:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:28:40.747 17:27:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:28:40.747 17:27:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:40.747 17:27:33 -- common/autotest_common.sh@10 -- # set +x
00:28:40.747 17:27:33 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:28:40.747 17:27:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:28:40.747 17:27:33 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:28:40.747 17:27:33 -- common/autotest_common.sh@10 -- # set +x
00:28:42.650 INFO: APP EXITING
00:28:42.650 INFO: killing all VMs
00:28:42.650 INFO: killing vhost app
00:28:42.650 INFO: EXIT DONE
00:28:43.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:43.475 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:28:43.475 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:28:44.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:44.043 Cleaning
00:28:44.043 Removing: /var/run/dpdk/spdk0/config
00:28:44.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:44.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:44.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:44.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:44.043 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:44.043 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:44.043 Removing: /var/run/dpdk/spdk1/config
00:28:44.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:28:44.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:28:44.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:28:44.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:28:44.043 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:28:44.043 Removing: /var/run/dpdk/spdk1/hugepage_info
00:28:44.043 Removing: /var/run/dpdk/spdk2/config
00:28:44.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:28:44.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:28:44.303 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:28:44.303 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:28:44.303 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:28:44.303 Removing: /var/run/dpdk/spdk2/hugepage_info
00:28:44.303 Removing: /var/run/dpdk/spdk3/config
00:28:44.303 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:28:44.303 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:28:44.303 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:28:44.303 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:28:44.303 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:28:44.303 Removing: /var/run/dpdk/spdk3/hugepage_info
00:28:44.303 Removing: /var/run/dpdk/spdk4/config
00:28:44.303 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:28:44.303 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:44.303 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:44.303 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:44.303 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:44.303 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:44.303 Removing: /dev/shm/nvmf_trace.0
00:28:44.303 Removing: /dev/shm/spdk_tgt_trace.pid58482
00:28:44.303 Removing: /var/run/dpdk/spdk0
00:28:44.303 Removing: /var/run/dpdk/spdk1
00:28:44.303 Removing: /var/run/dpdk/spdk2
00:28:44.303 Removing: /var/run/dpdk/spdk3
00:28:44.303 Removing: /var/run/dpdk/spdk4
00:28:44.303 Removing: /var/run/dpdk/spdk_pid101154
00:28:44.303 Removing: /var/run/dpdk/spdk_pid101200
00:28:44.303 Removing: /var/run/dpdk/spdk_pid101539
00:28:44.303 Removing: /var/run/dpdk/spdk_pid101585
00:28:44.303 Removing: /var/run/dpdk/spdk_pid101983
00:28:44.303 Removing: /var/run/dpdk/spdk_pid102541
00:28:44.303 Removing: /var/run/dpdk/spdk_pid102957
00:28:44.303 Removing: /var/run/dpdk/spdk_pid103965
00:28:44.303 Removing: /var/run/dpdk/spdk_pid105019
00:28:44.303 Removing: /var/run/dpdk/spdk_pid105137
00:28:44.303 Removing: /var/run/dpdk/spdk_pid105194
00:28:44.303 Removing: /var/run/dpdk/spdk_pid106781
00:28:44.303 Removing: /var/run/dpdk/spdk_pid107083
00:28:44.303 Removing: /var/run/dpdk/spdk_pid107438
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108007
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108012
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108431
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108584
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108740
00:28:44.303 Removing: /var/run/dpdk/spdk_pid108840
00:28:44.303 Removing: /var/run/dpdk/spdk_pid109090
00:28:44.303 Removing: /var/run/dpdk/spdk_pid109198
00:28:44.303 Removing: /var/run/dpdk/spdk_pid109912
00:28:44.303 Removing: /var/run/dpdk/spdk_pid109943
00:28:44.303 Removing: /var/run/dpdk/spdk_pid109983
00:28:44.303 Removing: /var/run/dpdk/spdk_pid110234
00:28:44.303 Removing: /var/run/dpdk/spdk_pid110269
00:28:44.303 Removing: /var/run/dpdk/spdk_pid110299
00:28:44.303 Removing: /var/run/dpdk/spdk_pid110767
00:28:44.303 Removing: /var/run/dpdk/spdk_pid110790
00:28:44.303 Removing: /var/run/dpdk/spdk_pid111248
00:28:44.303 Removing: /var/run/dpdk/spdk_pid111408
00:28:44.303 Removing: /var/run/dpdk/spdk_pid111435
00:28:44.303 Removing: /var/run/dpdk/spdk_pid58323
00:28:44.304 Removing: /var/run/dpdk/spdk_pid58482
00:28:44.304 Removing: /var/run/dpdk/spdk_pid58756
00:28:44.304 Removing: /var/run/dpdk/spdk_pid58843
00:28:44.304 Removing: /var/run/dpdk/spdk_pid58875
00:28:44.304 Removing: /var/run/dpdk/spdk_pid58990
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59006
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59146
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59432
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59620
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59718
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59819
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59908
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59947
00:28:44.304 Removing: /var/run/dpdk/spdk_pid59977
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60052
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60144
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60785
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60841
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60910
00:28:44.304 Removing: /var/run/dpdk/spdk_pid60938
00:28:44.304 Removing: /var/run/dpdk/spdk_pid61024
00:28:44.304 Removing: /var/run/dpdk/spdk_pid61052
00:28:44.304 Removing: /var/run/dpdk/spdk_pid61136
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61164
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61221
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61251
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61303
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61333
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61493
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61528
00:28:44.563 Removing: /var/run/dpdk/spdk_pid61611
00:28:44.563 Removing: /var/run/dpdk/spdk_pid62095
00:28:44.563 Removing: /var/run/dpdk/spdk_pid62478
00:28:44.563 Removing: /var/run/dpdk/spdk_pid65004
00:28:44.563 Removing: /var/run/dpdk/spdk_pid65045
00:28:44.563 Removing: /var/run/dpdk/spdk_pid65409
00:28:44.563 Removing: /var/run/dpdk/spdk_pid65460
00:28:44.563 Removing: /var/run/dpdk/spdk_pid65865
00:28:44.563 Removing: /var/run/dpdk/spdk_pid66437
00:28:44.563 Removing: /var/run/dpdk/spdk_pid66889
00:28:44.563 Removing: /var/run/dpdk/spdk_pid67969
00:28:44.563 Removing: /var/run/dpdk/spdk_pid69056
00:28:44.563 Removing: /var/run/dpdk/spdk_pid69179
00:28:44.563 Removing: /var/run/dpdk/spdk_pid69241
00:28:44.563 Removing: /var/run/dpdk/spdk_pid70874
00:28:44.563 Removing: /var/run/dpdk/spdk_pid71222
00:28:44.563 Removing: /var/run/dpdk/spdk_pid75074
00:28:44.563 Removing: /var/run/dpdk/spdk_pid75492
00:28:44.563 Removing: /var/run/dpdk/spdk_pid76121
00:28:44.563 Removing: /var/run/dpdk/spdk_pid76653
00:28:44.563 Removing: /var/run/dpdk/spdk_pid82453
00:28:44.563 Removing: /var/run/dpdk/spdk_pid82964
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83069
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83227
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83272
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83311
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83350
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83513
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83674
00:28:44.563 Removing: /var/run/dpdk/spdk_pid83924
00:28:44.563 Removing: /var/run/dpdk/spdk_pid84040
00:28:44.563 Removing: /var/run/dpdk/spdk_pid84311
00:28:44.563 Removing: /var/run/dpdk/spdk_pid84426
00:28:44.563 Removing: /var/run/dpdk/spdk_pid84547
00:28:44.563 Removing: /var/run/dpdk/spdk_pid84941
00:28:44.563 Removing: /var/run/dpdk/spdk_pid85409
00:28:44.563 Removing: /var/run/dpdk/spdk_pid85410
00:28:44.563 Removing: /var/run/dpdk/spdk_pid85411
00:28:44.563 Removing: /var/run/dpdk/spdk_pid85702
00:28:44.563 Removing: /var/run/dpdk/spdk_pid85990
00:28:44.563 Removing: /var/run/dpdk/spdk_pid86408
00:28:44.563 Removing: /var/run/dpdk/spdk_pid86760
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87342
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87348
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87731
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87745
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87765
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87790
00:28:44.563 Removing: /var/run/dpdk/spdk_pid87798
00:28:44.563 Removing: /var/run/dpdk/spdk_pid88205
00:28:44.563 Removing: /var/run/dpdk/spdk_pid88248
00:28:44.563 Removing: /var/run/dpdk/spdk_pid88641
00:28:44.563 Removing: /var/run/dpdk/spdk_pid88884
00:28:44.563 Removing: /var/run/dpdk/spdk_pid89404
00:28:44.563 Removing: /var/run/dpdk/spdk_pid90039
00:28:44.563 Removing: /var/run/dpdk/spdk_pid91494
00:28:44.563 Removing: /var/run/dpdk/spdk_pid92129
00:28:44.563 Removing: /var/run/dpdk/spdk_pid92137
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94169
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94246
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94323
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94396
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94544
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94621
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94706
00:28:44.563 Removing: /var/run/dpdk/spdk_pid94783
00:28:44.563 Removing: /var/run/dpdk/spdk_pid95173
00:28:44.563 Removing: /var/run/dpdk/spdk_pid95942
00:28:44.563 Removing: /var/run/dpdk/spdk_pid97358
00:28:44.563 Removing: /var/run/dpdk/spdk_pid97546
00:28:44.563 Removing: /var/run/dpdk/spdk_pid97838
00:28:44.563 Removing: /var/run/dpdk/spdk_pid98372
00:28:44.563 Removing: /var/run/dpdk/spdk_pid98730
00:28:44.563 Clean
00:28:44.822 17:27:37 -- common/autotest_common.sh@1453 -- # return 0
00:28:44.822 17:27:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:28:44.822 17:27:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:44.822 17:27:37 -- common/autotest_common.sh@10 -- # set +x
00:28:44.822 17:27:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:28:44.822 17:27:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:44.822 17:27:37 -- common/autotest_common.sh@10 -- # set +x
00:28:44.822 17:27:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:44.822 17:27:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:44.822 17:27:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:44.822 17:27:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:28:44.822 17:27:37 -- spdk/autotest.sh@398 -- # hostname
00:28:44.822 17:27:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:45.080 geninfo: WARNING: invalid characters removed from testname!
00:29:07.109 17:27:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:10.395 17:28:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:12.926 17:28:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:15.460 17:28:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:18.002 17:28:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:20.532 17:28:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:23.064 17:28:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:23.064 17:28:15 -- spdk/autorun.sh@1 -- $ timing_finish
00:29:23.064 17:28:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:29:23.065 17:28:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:23.065 17:28:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:23.065 17:28:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:23.074 + [[ -n 5260 ]]
00:29:23.074 + sudo kill 5260
00:29:23.083 [Pipeline] }
00:29:23.090 [Pipeline] // timeout
00:29:23.096 [Pipeline] }
00:29:23.111 [Pipeline] // stage
00:29:23.117 [Pipeline] }
00:29:23.131 [Pipeline] // catchError
00:29:23.141 [Pipeline] stage
00:29:23.143 [Pipeline] { (Stop VM)
00:29:23.156 [Pipeline] sh
00:29:23.436 + vagrant halt
00:29:26.747 ==> default: Halting domain...
00:29:32.031 [Pipeline] sh
00:29:32.316 + vagrant destroy -f
00:29:34.851 ==> default: Removing domain...
00:29:35.142 [Pipeline] sh
00:29:35.426 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:29:35.435 [Pipeline] }
00:29:35.454 [Pipeline] // stage
00:29:35.460 [Pipeline] }
00:29:35.478 [Pipeline] // dir
00:29:35.483 [Pipeline] }
00:29:35.500 [Pipeline] // wrap
00:29:35.506 [Pipeline] }
00:29:35.520 [Pipeline] // catchError
00:29:35.530 [Pipeline] stage
00:29:35.532 [Pipeline] { (Epilogue)
00:29:35.546 [Pipeline] sh
00:29:35.831 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:41.113 [Pipeline] catchError
00:29:41.116 [Pipeline] {
00:29:41.129 [Pipeline] sh
00:29:41.409 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:41.409 Artifacts sizes are good
00:29:41.418 [Pipeline] }
00:29:41.434 [Pipeline] // catchError
00:29:41.446 [Pipeline] archiveArtifacts
00:29:41.454 Archiving artifacts
00:29:41.591 [Pipeline] cleanWs
00:29:41.602 [WS-CLEANUP] Deleting project workspace...
00:29:41.603 [WS-CLEANUP] Deferred wipeout is used...
00:29:41.617 [WS-CLEANUP] done
00:29:41.618 [Pipeline] }
00:29:41.632 [Pipeline] // stage
00:29:41.636 [Pipeline] }
00:29:41.649 [Pipeline] // node
00:29:41.653 [Pipeline] End of Pipeline
00:29:41.689 Finished: SUCCESS